Documentation Index
Fetch the complete documentation index at: https://docs.xquik.com/llms.txt
Use this file to discover all available pages before exploring further.

Use this workflow when a research, sales, support, or monitoring system needs tweets as rows. Xquik can search tweets by keyword, hashtag, author, language, date range, media type, or engagement filters with tweet_search_extractor. Estimate the cost first, run the extraction job, then export CSV, JSON, or XLSX.
When to use this workflow
| Need | Use |
|---|---|
| Export tweet search results for spreadsheets | tweet_search_extractor plus CSV or XLSX export |
| Feed matching tweets into an app | tweet_search_extractor plus paginated JSON results |
| Read the latest search page only | GET /x/tweets/search |
| Control cost before scraping | resultsLimit on estimate and create requests |
Data you get
Tweet search exports include base user fields, tweet fields, engagement counts, and metadata when available.

| Data group | Fields |
|---|---|
| Tweet author | User ID, username, display name, follower count, verified state, profile image |
| Tweet | Tweet ID, tweet text, tweet created time, permalink |
| Engagement | Likes, reposts, replies, quotes, views, bookmarks |
| Metadata | Language, source app, conversation ID, attached media |
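For spreadsheet exports, these groups flatten into one row per tweet. Here is a minimal sketch of that flattening; the nested JSON field names (`author`, `likeCount`, `conversationId`, and so on) are assumptions for illustration, not Xquik's documented schema.

```python
def flatten_tweet(record: dict) -> dict:
    """Flatten a nested tweet record into one spreadsheet-style row.

    The input field names are illustrative assumptions; the output
    columns mirror the data groups in the table above.
    """
    author = record.get("author", {})
    return {
        # Tweet author
        "user_id": author.get("id"),
        "username": author.get("username"),
        "display_name": author.get("displayName"),
        "followers": author.get("followerCount"),
        "verified": author.get("verified"),
        # Tweet
        "tweet_id": record.get("id"),
        "text": record.get("text"),
        "created_at": record.get("createdAt"),
        "permalink": record.get("permalink"),
        # Engagement
        "likes": record.get("likeCount"),
        "reposts": record.get("repostCount"),
        "replies": record.get("replyCount"),
        "quotes": record.get("quoteCount"),
        "views": record.get("viewCount"),
        "bookmarks": record.get("bookmarkCount"),
        # Metadata
        "language": record.get("language"),
        "source_app": record.get("source"),
        "conversation_id": record.get("conversationId"),
    }
```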
Step 1: Estimate tweets and credits
Call POST /extractions/estimate before scraping tweets. tweet_search_extractor requires searchQuery. Add resultsLimit when you want a sample or a hard cost cap.
The response returns allowed, estimatedResults, creditsRequired, creditsAvailable, and source. For tweet search scraping, source is resultsLimit when a cap is set and unknown when no cap is set.
Step 2: Run the tweet search extraction
Create the job with the same toolType, searchQuery, filters, and optional resultsLimit.
Step 3: Poll job status
Poll GET /extractions/{id} until the job is completed or failed.
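The polling loop can be sketched with an injected status fetcher (any HTTP client wrapper) so it stays testable; the completed/failed terminal states come from this step, while the interval and timeout defaults are arbitrary assumptions.

```python
import time


def poll_extraction(fetch_status, job_id: str, interval_s: float = 5.0,
                    timeout_s: float = 600.0, sleep=time.sleep) -> dict:
    """Poll GET /extractions/{id} via fetch_status(job_id) until the
    job reaches a terminal state ('completed' or 'failed')."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = fetch_status(job_id)
        if job.get("status") in ("completed", "failed"):
            return job
        sleep(interval_s)  # back off between polls to avoid hammering the API
    raise TimeoutError(f"extraction {job_id} still running after {timeout_s}s")
```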
Step 4: Export CSV, JSON, or XLSX
Exports are free after the extraction job exists. Use CSV for spreadsheets, JSON for app ingestion, and XLSX for analyst handoff.

Direct tweet search API
Use GET /x/tweets/search when you need a paginated API response instead of a stored extraction job.
The response returns tweets, has_next_page, and next_cursor. Pass next_cursor back as cursor to fetch the next page. It costs 1 credit per tweet returned.
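The cursor loop above can be sketched as follows, with the page fetcher injected so the logic runs without a live API key. The tweets/has_next_page/next_cursor fields come from this section; max_pages is an added safety cap, worth keeping since every returned tweet costs a credit.

```python
def iter_search_tweets(fetch_page, query: str, max_pages: int = 10):
    """Yield tweets from GET /x/tweets/search, passing next_cursor back
    as cursor until has_next_page is false or max_pages is reached.
    fetch_page(query, cursor) should return one decoded response."""
    cursor = None
    for _ in range(max_pages):
        page = fetch_page(query, cursor)
        yield from page.get("tweets", [])
        if not page.get("has_next_page"):
            return
        cursor = page.get("next_cursor")
```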
Handoff checklist
| Handoff | Use |
|---|---|
| Spreadsheet | Export format=csv or format=xlsx |
| App ingestion | Export format=json or paginate GET /extractions/{id} |
| Cost control | Set resultsLimit on both estimate and create calls |
| Live monitoring | Create an account or keyword monitor with signed webhooks |
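The checklist above can be encoded as a small helper that picks the export format per handoff target. The format values (csv, xlsx, json) come from the table; the /extractions/{id}/export path is an assumed URL shape for illustration, not a documented endpoint.

```python
# Handoff target -> export format, per the checklist above.
EXPORT_FORMATS = {"spreadsheet": "csv", "analyst": "xlsx", "app": "json"}


def export_url(base_url: str, extraction_id: str, handoff: str) -> str:
    """Build an export URL for a finished extraction. The path shape
    is an assumption; only the format values are from the checklist."""
    fmt = EXPORT_FORMATS[handoff]
    return f"{base_url}/extractions/{extraction_id}/export?format={fmt}"
```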
Related: Extraction workflow · Create extraction · Search tweets