How to Export AI-Generated Metadata from Moments Lab
Moments Lab uses MXT-2 multimodal AI to generate time-coded metadata from video content. Here's how to get that metadata into a spreadsheet for quality assessment.
Moments Lab does not have a built-in CSV export. Metadata generated by MXT-2 is returned as non-proprietary JSON through the Moments Lab API, which can be converted to CSV. If you use Moments Lab alongside a MAM system (AVID, NINA, Arvato), the AI-generated metadata is pushed to that system and can be exported from there.
Moments Lab API (Just Index)
The Moments Lab public API provides access to all MXT-2 indexed metadata in a non-proprietary JSON format. You can retrieve AI-generated descriptions, tags, transcript segments, detected entities, and custom taxonomy labels for all indexed media, then flatten the JSON to CSV for MQS analysis.
```python
import csv

import requests

# Moments Lab API credentials
API_BASE = "https://api.momentslab.com/v1"
API_KEY = "your_api_key"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def list_indexed_media():
    """Retrieve all indexed media with MXT-2 metadata, paging through results."""
    all_media = []
    page = 1
    while True:
        resp = requests.get(
            f"{API_BASE}/media",
            headers=HEADERS,
            params={"page": page, "limit": 100},
        )
        resp.raise_for_status()
        data = resp.json()
        items = data.get("items", [])
        if not items:
            break
        all_media.extend(items)
        page += 1
        if len(items) < 100:  # a short page means we reached the end
            break
    return all_media


media = list_indexed_media()

# Flatten to CSV
with open("moments_lab_metadata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([
        "Media ID", "Title", "Description", "Tags",
        "Detected People", "Detected Logos", "Transcript",
        "Duration", "Created", "Custom Labels",
    ])
    for item in media:
        writer.writerow([
            item.get("id", ""),
            item.get("title", ""),
            item.get("description", ""),
            ", ".join(item.get("tags", [])),
            ", ".join(item.get("people", [])),
            ", ".join(item.get("logos", [])),
            item.get("transcript", ""),
            item.get("duration", ""),
            item.get("created_at", ""),
            ", ".join(item.get("custom_labels", [])),
        ])

print(f"Exported {len(media)} items to moments_lab_metadata.csv")
```

MAM Integration Export
If you use Moments Lab alongside a MAM system (AVID, NINA, Arvato, Bitcentral), MXT-2 indexed metadata is pushed to your MAM automatically. You can then use your MAM system's built-in export tools to extract the enriched metadata as a CSV or Excel file.
What metadata fields can you export?
| Field | API Export | MAM Integration |
|---|---|---|
| AI-generated description | ✓ | ✓ |
| Tags / keywords | ✓ | ✓ |
| Detected people | ✓ | If mapped |
| Detected logos | ✓ | If mapped |
| Detected landmarks | ✓ | If mapped |
| Transcript (speech-to-text) | ✓ | If mapped |
| Soundbites / key quotes | ✓ | If mapped |
| Custom classifier labels | ✓ | ✓ |
| Time codes | ✓ | ✕ |
| Confidence scores | ✓ | ✕ |
| Duration | ✓ | ✓ |
| File format / codec | ✓ | ✓ |
| Created / modified dates | ✓ | ✓ |
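Time codes and confidence scores only survive in the API export, so if you need them, flatten them yourself. A minimal sketch of turning a per-segment response into CSV rows; the endpoint path and field names (`start_tc`, `end_tc`, `confidence`) are illustrative assumptions, so check the actual Moments Lab API response for the real keys:

```python
import csv

def segment_rows(media_id, segments):
    """Flatten time-coded segments into CSV-ready rows.
    Field names are assumed, not the documented MXT-2 schema."""
    return [
        [
            media_id,
            seg.get("start_tc", ""),    # segment in-point
            seg.get("end_tc", ""),      # segment out-point
            seg.get("description", ""),
            seg.get("confidence", ""),  # per-segment confidence score
        ]
        for seg in segments
    ]

# Sample payload shaped like a hypothetical per-media segments response
sample = [
    {"start_tc": "00:00:00:00", "end_tc": "00:00:04:12",
     "description": "Anchor introduces story", "confidence": 0.91},
    {"start_tc": "00:00:04:12", "end_tc": "00:00:11:03",
     "description": "Aerial shot of stadium", "confidence": 0.87},
]

with open("segments.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Media ID", "Start TC", "End TC", "Description", "Confidence"])
    writer.writerows(segment_rows("media_001", sample))
```

Keeping one row per segment preserves the time codes; the aggregation caveat below explains when you would instead collapse these back to one row per video.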
Export considerations
- JSON to CSV conversion: Moments Lab returns metadata as JSON. Nested fields (arrays of detected entities, time-coded segments) need to be flattened for CSV. The script above handles basic flattening, but complex nested structures may need custom mapping.
- Shot-level granularity: MXT-2 indexes at the shot and scene level. If you export at this granularity, your CSV will have many more rows than source videos. For MQS scoring, aggregating to one row per video is recommended unless you want to assess metadata quality at the scene level.
- Thesaurus-controlled terms: Custom classifiers use terms defined in the Moments Lab Thesaurus tool. These controlled terms produce consistent vocabulary in your export. If vocabulary appears inconsistent, review and consolidate Thesaurus entries.
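The shot-level caveat above can be sketched in code: collapse shot-level rows into one row per video by concatenating descriptions and deduplicating tags. The row layout here is illustrative, not the actual MXT-2 schema:

```python
from collections import defaultdict

def aggregate_by_video(shot_rows):
    """Collapse (video_id, description, tags) shot rows into one row per
    video: descriptions joined in order, tags deduplicated and sorted."""
    grouped = defaultdict(lambda: {"descriptions": [], "tags": set()})
    for video_id, description, tags in shot_rows:
        grouped[video_id]["descriptions"].append(description)
        grouped[video_id]["tags"].update(tags)
    return [
        [vid, " / ".join(g["descriptions"]), ", ".join(sorted(g["tags"]))]
        for vid, g in grouped.items()
    ]

# Hypothetical shot-level rows: two shots for vid_1, one for vid_2
shots = [
    ("vid_1", "Wide shot of press conference", ["politics", "podium"]),
    ("vid_1", "Close-up of speaker", ["politics", "speaker"]),
    ("vid_2", "Drone shot of coastline", ["nature", "ocean"]),
]
rows = aggregate_by_video(shots)
print(len(rows))  # 2 videos instead of 3 shots
```

Joining descriptions with a delimiter keeps the scene-level detail inspectable in the aggregated row; for MQS scoring you could equally keep only the first or longest description per video.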
You have your metadata export.
Now score it.
Upload your CSV or Excel file to MQS and get a structural metadata health score out of 100 with dimension breakdowns and actionable diagnostics.