How Twitter Crawls Your Pages

Understanding Twitterbot and how it fetches your link previews

The Twitterbot crawler

When someone shares a URL in a tweet, Twitter fetches the page with its crawler (Twitterbot/1.0) and pulls out the meta tags for the card preview. The crawler doesn’t execute JavaScript, so your Twitter Card and Open Graph tags need to be in the raw HTML that the server returns.
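
For example, a minimal set of card tags in the served HTML might look like this (the values are placeholders; Twitter reads its own twitter:* tags and falls back to Open Graph):

<head>
  <meta name="twitter:card" content="summary_large_image">
  <meta property="og:title" content="Example article">
  <meta property="og:description" content="A short summary shown in the card.">
  <meta property="og:image" content="https://example.com/preview.png">
</head>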

SPAs need server rendering

If your meta tags are injected by client-side JS (React, Vue, etc.), Twitterbot won’t see them. You need SSR, pre-rendering, or at minimum static meta tags in your HTML shell.
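
As a rough sketch, server rendering can be as simple as filling an HTML template per request. Here’s what that might look like with Node and Express (the route, titles, and image URL are made up for illustration):

import express from "express";

const app = express();

app.get("/articles/:slug", (req, res) => {
  // In a real app these would come from your database or CMS.
  const title = "Example article";
  const description = "A short summary shown in the card.";

  // The tags are rendered on the server, so Twitterbot sees them without running any JS.
  res.type("html").send(`<!doctype html>
<html>
  <head>
    <meta name="twitter:card" content="summary_large_image">
    <meta property="og:title" content="${title}">
    <meta property="og:description" content="${description}">
    <meta property="og:image" content="https://example.com/images/${req.params.slug}.png">
  </head>
  <body>
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

app.listen(3000);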

Response requirements

Twitterbot expects a 200 OK with Content-Type: text/html. It follows redirects (301, 302), but long redirect chains may cause it to bail. Pages behind auth, paywalls, or IP restrictions won’t produce cards.
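
You can check all three at once by having curl report the final status, content type, and redirect count while identifying as Twitterbot (the URL is a placeholder):

curl -A "Twitterbot/1.0" -sL -o /dev/null \
  -w "status: %{http_code}\ncontent-type: %{content_type}\nredirects: %{num_redirects}\n" \
  https://example.com/your-page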

Rate limiting

If your server rate-limits Twitterbot, cards fail silently and the tweet just shows a bare URL. Make sure robots.txt allows Twitterbot and that your rate-limiting rules don’t throttle the crawler’s user agent.
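
If you want to be explicit about it, a robots.txt along these lines lets Twitterbot through while keeping your other rules (the disallowed path is just an example):

User-agent: Twitterbot
Disallow:

User-agent: *
Disallow: /private/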

Testing crawler access

Simulate Twitterbot with curl:

curl -A "Twitterbot/1.0" https://example.com/your-page

If your <meta> tags show up in the response, you’re good.
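
If the page is large, you can filter the response down to just the relevant tags (the grep pattern is only a convenience):

curl -sA "Twitterbot/1.0" https://example.com/your-page | grep -iE 'twitter:|og:'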