
Drupal Google Indexing: The Complete Guide to Getting Your Drupal Content Into Search

Drupal is one of the most powerful CMS platforms available, used by governments, universities, and large enterprises. But its flexibility means indexing is not automatic — you need to install and configure the right modules to ensure Google can discover and index all your content.

Updated: Apr 1, 2026

Drupal is a content management framework known for its flexibility, security, and scalability. It powers some of the web's most complex sites: government portals with thousands of pages, university websites with multiple departments and content types, enterprise intranets, and large media publications. Drupal's strength is that it can be configured to handle virtually any content architecture, from a simple blog to a multi-language, multi-site platform with hundreds of content types and complex access permissions.

This flexibility comes at a cost: Drupal does not include SEO features out of the box. Unlike WordPress (which has a built-in sitemap and basic SEO), Shopify (which auto-generates sitemaps and canonical tags), or Ghost (which includes comprehensive SEO by default), Drupal requires you to install and configure contributed modules for every SEO function. XML sitemap generation, URL aliases, meta tag management, redirect handling, and robots.txt customization all require separate modules.

The core set of Drupal SEO modules — often called the "Drupal SEO stack" — includes Simple XML Sitemap (or the older XML Sitemap) for sitemap generation, Pathauto for automatic URL aliases, Metatag for meta tag management, Redirect for 301 redirect handling, and RobotsTxt for controlling crawler access. Without these modules, Drupal generates opaque numeric URLs like /node/123, has no XML sitemap, produces no meta descriptions, and ships a static robots.txt that you can only edit at the filesystem level.

This guide walks through every module and configuration needed to make a Drupal site fully indexable by Google. Whether you are running Drupal 10, Drupal 11, or still maintaining a Drupal 9 site, the modules and concepts are largely the same. We cover installation, configuration, common pitfalls specific to Drupal's architecture, and advanced techniques for large-scale Drupal installations.

IndexBolt gets your URLs crawled by Google in under 24 hours — no manual submissions, no waiting weeks.

The Drupal SEO Module Stack

A properly configured Drupal site for search engine indexing requires a specific set of contributed modules. Each module handles a different aspect of SEO, and they work together to provide comprehensive indexing support.

Simple XML Sitemap (or the older XML Sitemap module) generates your /sitemap.xml file. It allows you to specify which content types, taxonomy vocabularies, menus, and custom entity types should be included in the sitemap. You can set priorities and change frequencies for each type. Simple XML Sitemap is the modern choice and supports Drupal 10 and 11 natively, while the older XML Sitemap module may have compatibility issues with newer Drupal versions.

Pathauto generates human-readable URL aliases automatically based on configurable patterns. Instead of /node/123, Pathauto creates URLs like /blog/my-article-title based on token patterns you define. Pathauto depends on the Token module for dynamic URL generation. Without Pathauto, every Drupal page is only accessible at its internal node path, which is terrible for both users and SEO.

The Metatag module provides a UI for configuring meta tags (title, description, canonical, robots, Open Graph, Twitter Cards, and more) at the global, content type, and individual node level. It supports token replacement, so you can create patterns like "[node:title] | [site:name]" for title tags. Without the Metatag module, Drupal generates only a basic <title> tag with no meta description, no canonical URL, and no social meta tags.

The Redirect module manages 301 redirects. When you change a URL alias (either manually or through Pathauto), the Redirect module can automatically create a redirect from the old URL to the new one. It also provides a UI for creating manual redirects and can fix common redirect issues like trailing slash inconsistencies and mixed-case URLs.

The RobotsTxt module lets you manage your robots.txt file through the Drupal admin interface instead of editing the file directly on the server. This matters because Drupal's robots.txt is a static file in the docroot, and manual changes to it can be overwritten during core updates.

Configuring URL Aliases with Pathauto

URL aliases are fundamental to Drupal SEO. Drupal's internal paths (/node/123, /taxonomy/term/45) are technically crawlable and indexable, but they provide no keyword information and create an opaque site structure. Pathauto solves this by automatically generating URL aliases based on patterns you define.

To configure Pathauto, install it via Composer (composer require drupal/pathauto), enable it at /admin/modules, and then configure patterns at /admin/config/search/path/patterns. For each content type, create a pattern using tokens. Common patterns include /blog/[node:title] for blog articles, /[node:content-type]/[node:title] for multiple content types, /products/[node:field_product_category:entity:name]/[node:title] for hierarchical product URLs, and /[node:menu-link:parents:join-path]/[node:title] for menu-hierarchy-based URLs.

Pathauto uses the Token module to resolve these patterns. It automatically lowercases titles, replaces spaces with hyphens, removes special characters, and truncates long URLs. You can customize the transliteration settings at /admin/config/search/path/settings to control how special characters, accents, and non-Latin scripts are handled.
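Pathauto's cleaning rules are configurable, but the default transformation is easy to picture. The following Python sketch is not Pathauto's actual code — it simply mirrors the documented behavior (transliterate, lowercase, hyphenate, strip punctuation, truncate on a word boundary):

```python
import re
import unicodedata

def pathauto_style_alias(title: str, max_length: int = 100) -> str:
    """Sketch of a Pathauto-style alias: transliterate, lowercase,
    hyphenate, strip punctuation, truncate without cutting a word."""
    # Transliterate accented characters to their ASCII equivalents
    ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    # Lowercase, then collapse any run of non-alphanumerics into a hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")
    # Truncate on the last hyphen so no word is cut in half
    if len(slug) > max_length:
        slug = slug[:max_length].rsplit("-", 1)[0]
    return slug

print(pathauto_style_alias("Café Culture & the Modern Web!"))
# cafe-culture-the-modern-web
```

The real module performs the equivalent steps in PHP, driven by the settings at /admin/config/search/path/settings.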

A critical Pathauto setting is the "Update action" for existing aliases. When you change a node's title, should Pathauto update the URL alias? If it does, the old URL breaks unless the Redirect module is installed. The recommended configuration is: "Create a new alias. Leave the existing alias functioning" — this creates a new alias while keeping the old one active. Combined with the Redirect module, the old URL automatically redirects to the new one.

For existing sites with thousands of /node/123 URLs already indexed by Google, Pathauto can bulk-generate aliases for all existing content. Go to /admin/config/search/path/update_bulk and select which content types to process. After bulk generation, install the Redirect module and enable its "Enforce clean and canonical URLs" setting so that requests for /node/123 paths return 301 redirects to the new aliases. This ensures Google follows the redirects and updates its index to the clean URLs.

Skip the manual work — IndexBolt submits URLs directly to Google's crawl queue. Start with 100 free credits.

100 free credits. No credit card required.

Metatag Module Configuration for Indexing

The Metatag module is essential for controlling how Google sees your pages. Install it via Composer (composer require drupal/metatag) and enable the Metatag module along with its submodules: Metatag: Open Graph, Metatag: Twitter Cards, and Metatag: Verification (for search console verification tags).

Configure global defaults at /admin/config/search/metatag. The global configuration sets fallback meta tags for any page that does not have more specific configuration. Set the global title to "[current-page:title] | [site:name]", the description to "[node:summary]" (or leave empty to avoid generic descriptions), and the canonical URL to "[current-page:url]".

Then create content-type-specific overrides. For blog articles, you might set the title to "[node:title] - Blog | [site:name]" and the description to "[node:field_meta_description]" (using a dedicated meta description field you add to the content type). For product pages, use product-specific tokens. For taxonomy term pages, use "[term:name] - [vocabulary:name] | [site:name]" as the title pattern.

The Metatag module also controls the robots meta directive per content type and per individual node. For content types that should not be indexed (like admin-only content types, webform confirmation pages, or user profile pages), set the robots meta to "noindex, follow" at the content type level. This prevents Google from indexing those pages while still following their links.

For individual nodes, content editors can override the meta tags in the "Meta tags" fieldset on the node edit form. Train your editors to write custom meta descriptions for important content — the auto-generated summary is often not optimized for search. If you want to enforce meta description writing, make the field required through form display configuration or use a custom validation handler.

The Metatag module also supports hreflang tags for multilingual sites through the Metatag: hreflang submodule. If your Drupal site is multilingual (using the core Translation module), enable hreflang and configure it to automatically generate hreflang tags linking all language versions of each page.

XML Sitemap Generation and Configuration

Install Simple XML Sitemap via Composer (composer require drupal/simple_sitemap) and enable it. Configure it at /admin/config/search/simplesitemap. The module allows you to create multiple sitemaps (useful for large sites with different content sections) and select which entity types, bundles, and specific entities to include.

For most Drupal sites, include published nodes of content types that should be indexed (articles, pages, products), published taxonomy term pages that have substantial content, and any custom entity types that generate public pages. Exclude content types that are administrative, behind access control, or inherently thin (like webform submissions or user profiles).

Simple XML Sitemap supports priority and changefreq settings per content type. While Google has stated it largely ignores these hints, setting them helps organize your sitemap and signals your content's relative importance. Set homepage priority to 1.0, main content types to 0.8, and secondary content like taxonomy terms to 0.5.

The module generates the sitemap at /sitemap.xml by default and splits it into multiple files when the URL count exceeds the configured limit (default 2000, maximum 50000 per sitemap spec). For large Drupal sites with tens of thousands of pages, the sitemap generation can be resource-intensive. Configure cron to regenerate the sitemap during off-peak hours and set the regeneration interval appropriately (every 6-24 hours for most sites).

A common issue with Drupal sitemaps is entity access. Drupal's permission system can prevent the sitemap generator from accessing nodes that are viewable by anonymous users but not by the cron user. The Simple XML Sitemap module generates the sitemap during cron execution, using the anonymous user's permissions by default. If your nodes require specific permissions to view, verify that anonymous users can access them, or configure the module to generate the sitemap using a different user's permissions.

After configuring the sitemap, submit it to Google Search Console at Sitemaps > Add a new sitemap > yourdomain.com/sitemap.xml. Monitor the sitemap status for errors like 404 URLs (deleted nodes still in sitemap), blocked URLs (robots.txt conflicts), and redirect URLs (nodes with changed aliases).
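Before submitting, it is worth auditing what the sitemap actually contains — in particular, whether any unaliased /node/ paths slipped in. A minimal Python sketch, using a hypothetical inline sitemap in place of a fetched /sitemap.xml:

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract <loc> values from a urlset sitemap (not a sitemap index)."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", NS)]

# Hypothetical sitemap snippet standing in for a fetched /sitemap.xml
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blog/my-article</loc></url>
  <url><loc>https://example.com/node/123</loc></url>
</urlset>"""

for url in sitemap_urls(sample):
    # Any /node/ path here means a Pathauto alias is missing for that content
    flag = "  <-- internal path, needs an alias" if "/node/" in url else ""
    print(url + flag)
```

The same loop is a convenient place to spot-check HTTP status codes for a sample of URLs before resubmitting the sitemap.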

Drupal's Node Paths, Aliases, and Duplicate Content

Drupal has a unique duplicate content problem that other CMS platforms do not share: content is accessible at both the internal path (/node/123) and the URL alias (/blog/my-article). Without proper configuration, Google can index both URLs, creating duplicate content.

The first defense is the Redirect module's "Enforce clean and canonical URLs" setting. When enabled, visiting /node/123 for a node that has an alias /blog/my-article results in a 301 redirect to the alias. This tells Google that the alias is the canonical URL and the node path should be ignored. Enable this setting at /admin/config/search/redirect/settings.

The second defense is the canonical URL meta tag from the Metatag module. Even if a node is somehow accessed at its internal path, the canonical tag in the HTML <head> points to the alias URL. Google respects canonical tags and consolidates ranking signals to the canonical URL.

The third defense is the robots.txt file. Add Disallow: /node/ to your robots.txt to prevent Google from crawling internal node paths entirely. This is a belt-and-suspenders approach, with one caveat: a robots.txt block also stops Googlebot from seeing the 301 redirects and canonical tags on those paths. If /node/ URLs are already indexed, wait until they have dropped out of the index before adding the rule; combined with redirects and canonical tags, it then ensures node paths never get indexed again.

Drupal also creates potential duplicates through taxonomy term pages. If you have a vocabulary called "Categories" with a term called "Technology," Drupal creates a page at /taxonomy/term/5 (and, with Pathauto, an alias like /categories/technology) that lists all nodes tagged with that term. If the taxonomy term page has no unique introductory content — just a list of node teasers that appear elsewhere — Google may classify it as thin or duplicate content.

Views-generated pages add another layer of complexity. Drupal's Views module can create pages that list content by various criteria, with pagination. A View listing all blog posts with 10 per page creates URLs like /blog?page=1, /blog?page=2, etc. Each paginated page has similar content (just different posts). Without rel="next" and rel="prev" pagination tags (which Google deprecated but some SEO practitioners still use) or a noindex directive on paginated pages, Google may waste crawl budget on deep pagination pages with diminishing content value.

Drupal Caching and Its Impact on Meta Tags

Drupal has one of the most sophisticated caching systems of any CMS, and while this is generally a performance benefit, it can cause SEO headaches if not understood properly.

Drupal's page cache (Internal Page Cache module for anonymous users, Dynamic Page Cache for authenticated users) stores fully rendered HTML pages. When you update a node's meta tags through the Metatag module, the cached HTML may continue serving the old meta tags until the cache is cleared. For sites with aggressive caching (TTL of hours or days), this means SEO changes can be invisible to Google for an extended period.

The solution is to understand Drupal's cache tag system. When you edit a node, Drupal's cache tag system should automatically invalidate any cached page containing that node's content. This works correctly when the meta tag changes are part of the node edit (using the Metatag module's per-node override). However, changes to global or content-type-level meta tag defaults may not trigger cache invalidation for all affected pages. After changing global meta tag settings, manually clear the site's page cache at /admin/config/development/performance > Clear all caches.

External caching layers add another dimension. If your Drupal site is behind Varnish, a reverse proxy cache, or a CDN like Cloudflare, you have an additional cache layer that does not know about Drupal's cache tags. After clearing Drupal's internal cache, you also need to clear the external cache. For Varnish, use the Varnish Purge module to integrate Drupal's cache tag system with Varnish's purge mechanism. For Cloudflare, use the Cloudflare module to automatically purge CDN cache when Drupal content changes.

The Purge module and its associated modules (Purge Queuer, Purge Processor, and platform-specific plugins) provide a unified interface for managing cache invalidation across all layers. For SEO purposes, the key requirement is: when a meta tag changes on any page, every cached version of that page (Drupal internal cache, Varnish, CDN) must be invalidated so that Google's next crawl sees the updated tags.

Be particularly cautious with authenticated vs. anonymous caching. Drupal can serve different content to authenticated users (editors, admins) vs. anonymous users (including Googlebot). If your Metatag configuration includes conditions based on user roles, make sure the anonymous version has the correct meta tags. Test by logging out (or using an incognito window) and viewing the page source to verify the meta tags Google will see.

Step-by-Step Guide

Step 1: Install the core SEO module stack

Use Composer to install the essential SEO modules: composer require drupal/simple_sitemap drupal/pathauto drupal/metatag drupal/redirect drupal/token. Then enable them through the Drupal admin at /admin/modules or via Drush: drush en simple_sitemap pathauto metatag metatag_open_graph redirect. These modules provide the foundation for sitemap generation, URL aliases, meta tag management, and redirect handling. Verify each module is enabled and has no dependency errors at /admin/reports/status.

Step 2: Configure Pathauto URL alias patterns

Go to /admin/config/search/path/patterns and create a URL pattern for each content type. For articles, use a pattern like /blog/[node:title]. For pages, use /[node:title]. For products (if applicable), use /products/[node:title] or /products/[node:field_category:entity:name]/[node:title] for category-based hierarchy. After creating patterns, go to /admin/config/search/path/update_bulk and bulk-generate aliases for all existing content. Then install and enable the Redirect module to create automatic 301 redirects from internal node paths (/node/123) to the new aliases.

Step 3: Configure the Metatag module for all content types

Navigate to /admin/config/search/metatag and configure meta tag defaults. Set the global title pattern to [current-page:title] | [site:name] and the canonical URL to [current-page:url]. Then add content-type-specific overrides: for each content type, click "Add" and configure the title, description, canonical URL, and robots directives. Set the description to [node:field_meta_description] if you have a dedicated field, or [node:summary] as a fallback. For content types that should not be indexed (webforms, internal pages), set the robots meta to noindex, follow.

Step 4: Configure and generate the XML sitemap

Go to /admin/config/search/simplesitemap and configure which entity types to include. Enable published nodes for all public content types, enable taxonomy terms for vocabularies with substantial content, and exclude user profiles, webform submissions, and other non-public entities. Set sitemap priorities (homepage 1.0, main content 0.8, secondary content 0.5). Click "Generate" to create the sitemap immediately, then visit /sitemap.xml to verify it contains the expected URLs. Configure the cron interval for automatic regeneration.

Step 5: Configure robots.txt to block internal paths

Edit your Drupal site's robots.txt file (in the docroot) and add rules to block internal paths that should not be crawled. Add Disallow: /node/ (blocks internal node paths), Disallow: /admin/ (blocks admin pages), Disallow: /user/ (blocks user profile and login pages), and Disallow: /search/ (blocks Drupal's internal search results). Add Sitemap: https://yourdomain.com/sitemap.xml at the bottom. If you use the RobotsTxt module, manage these rules at /admin/config/search/robotstxt instead of editing the file directly.
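Assembled, the rules from this step give a robots.txt along these lines (the domain is a placeholder):

```text
User-agent: *
Disallow: /node/
Disallow: /admin/
Disallow: /user/
Disallow: /search/

Sitemap: https://yourdomain.com/sitemap.xml
```

Drupal's stock robots.txt already contains several of these admin-path rules, so add to the existing file rather than replacing it wholesale.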

Step 6: Submit sitemap and verify in Google Search Console

Add your Drupal site to Google Search Console. For verification, use the Metatag: Verification submodule to add the Google verification meta tag at /admin/config/search/metatag > Global > Verification. After verification, go to Sitemaps in Google Search Console and submit your sitemap URL. Monitor the sitemap report for errors. Common errors include URLs returning 403 (permission issues), URLs returning 301 (aliases redirecting from node paths), and URLs with soft 404s (nodes with empty content). Fix each error category before resubmitting.

Step 7: Submit priority pages through IndexBolt for faster indexing

After completing the module installation and configuration, Drupal sites often have a backlog of pages that need indexing, especially after a migration or major restructure. Export your sitemap URLs and identify pages that are not yet indexed using Google Search Console's Pages report. Submit the highest-priority pages through IndexBolt — focus on your main landing pages, most valuable content nodes, and any pages with incoming backlinks that need to maintain their search visibility. Drupal's clean, server-rendered HTML makes it an excellent candidate for IndexBolt's indexing pipeline.

Done with the manual steps? Speed things up.

IndexBolt submits your URLs directly to Google — most get crawled in under 24 hours.

Common Issues & How to Fix Them

Node paths (/node/123) and URL aliases creating duplicate content

Cause: Drupal content is accessible at both the internal node path (/node/123) and the URL alias (/blog/my-article). Without the Redirect module's canonical URL enforcement, Google can discover and index both URLs, creating duplicate content that splits ranking signals between two URLs for the same page.

Fix: Install and enable the Redirect module, then go to /admin/config/search/redirect/settings and enable "Enforce clean and canonical URLs." This creates automatic 301 redirects from /node/123 to the URL alias. Also configure the Metatag module to output canonical URLs pointing to the alias. For belt-and-suspenders protection, add Disallow: /node/ to robots.txt to prevent Google from crawling internal paths entirely — but only after already-indexed /node/ URLs have dropped out of the index, since a robots block hides the redirects from Googlebot.

Views pagination creating hundreds of thin pages

Cause: Drupal's Views module generates paginated lists that create URLs like /blog?page=1, /blog?page=2, through /blog?page=50. Each paginated page contains a small subset of content (typically 10-25 items per page) and each page looks similar from Google's perspective. Deep pagination pages (page 10 and beyond) have very little discovery value and waste crawl budget.

Fix: For Views with deep pagination, add a noindex directive to paginated pages beyond the first page. You can do this with the Metatag module's token system or with custom code that detects the page parameter. Alternatively, use the "Load more" pattern (infinite scroll or "Load more" button) instead of traditional pagination, which keeps all content on a single URL. Reduce the total number of paginated pages by increasing the items-per-page count in your View settings.
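The noindex rule itself is simple: Drupal's pager is zero-based (?page=0 is the first page), so any request with page greater than zero gets the directive. A Python sketch of that check (in Drupal the logic would live in a preprocess hook or Metatag configuration, not Python):

```python
from urllib.parse import urlparse, parse_qs

def should_noindex(url: str) -> bool:
    """Noindex any paginated Views page beyond the first page.
    Drupal's pager uses ?page=N, where page=0 is the first page."""
    query = parse_qs(urlparse(url).query)
    page_values = query.get("page", ["0"])
    try:
        return int(page_values[0]) > 0
    except ValueError:
        return False

print(should_noindex("https://example.com/blog"))         # False
print(should_noindex("https://example.com/blog?page=0"))  # False
print(should_noindex("https://example.com/blog?page=7"))  # True
```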

Taxonomy term pages with thin content getting indexed

Cause: Drupal's taxonomy system creates a page for every term in every vocabulary. Terms with few tagged nodes result in thin archive pages — a page with one or two content teasers and no unique text. Google may index these thin pages but rank them poorly, diluting your site's overall quality signals.

Fix: Add description content to taxonomy terms that serve as category pages. In the taxonomy term edit form, write 100-300 words of unique content describing the topic. Configure your taxonomy term template to display this description prominently. For vocabularies where term pages should not be indexed at all (like internal tagging vocabularies), set the Metatag defaults for that vocabulary's terms to noindex. Remove thin taxonomy term pages from your sitemap by excluding the vocabulary in the Simple XML Sitemap configuration.

Drupal permissions blocking anonymous access to published content

Cause: Drupal's granular permission system can inadvertently prevent anonymous users (including Googlebot) from accessing published content. This happens when the "View published content" permission is removed from the anonymous role, when content access modules (Content Access, Node Access) restrict viewing by role, or when field-level permissions hide content from anonymous users.

Fix: Go to /admin/people/permissions and verify that the anonymous user role has the "View published content" permission enabled for all content types that should be indexed. If you use content access modules, audit their configuration to ensure published nodes are accessible to anonymous users. Test by logging out completely and visiting your content pages — if you see an access denied page, the permissions are wrong. Also verify that the Simple XML Sitemap module can access the content during cron (it runs as the anonymous user by default).

Caching serving stale meta tags after SEO changes

Cause: Drupal's aggressive page caching (Internal Page Cache, Dynamic Page Cache, external Varnish or CDN) stores fully rendered HTML including meta tags. When you update meta tags through the Metatag module — especially global or content-type-level defaults — the cached HTML may continue serving the old meta tags for hours or days, depending on your cache TTL configuration.

Fix: After making meta tag changes at the global or content-type level, clear all caches at /admin/config/development/performance > Clear all caches. If you use Varnish, purge the Varnish cache as well (ban req.http.host == "yourdomain.com" in Varnish CLI). If you use Cloudflare or another CDN, purge the CDN cache through its dashboard or API. Verify the updated meta tags by visiting a page in an incognito browser window and viewing the source. For ongoing reliability, install the Purge module to automate cache invalidation across all layers.

Module conflicts causing duplicate or missing meta tags

Cause: Multiple Drupal modules may attempt to generate the same meta tags. For example, a theme might output its own title tag, the Metatag module generates another, and a custom module adds a third. Similarly, the SEO Checklist module, Google Analytics module, or other contrib modules may inject their own meta tags that conflict with Metatag module output.

Fix: View the page source of several content pages and search for duplicate meta tags — look for multiple <title> tags, multiple <meta name="description"> tags, or multiple <link rel="canonical"> tags. If duplicates exist, identify which module generates each one. Disable the duplicate source — typically by removing the meta tag output from the theme template (check html.html.twig and page.html.twig) and relying solely on the Metatag module for all meta tag generation. The Metatag module should be the single source of truth for all SEO-related meta tags.
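This duplicate hunt can be scripted rather than done by eye. A rough Python sketch that counts the tags which should appear exactly once, run here against a hypothetical saved page source (a real audit would fetch each page as an anonymous user first):

```python
import re

def count_seo_tags(html: str) -> dict[str, int]:
    """Count tags that must appear exactly once per page."""
    return {
        "title": len(re.findall(r"<title[\s>]", html, re.I)),
        "meta description": len(re.findall(
            r'<meta\s[^>]*name=["\']description["\']', html, re.I)),
        "canonical": len(re.findall(
            r'<link\s[^>]*rel=["\']canonical["\']', html, re.I)),
    }

# Hypothetical page source with a theme-generated title AND a Metatag title
sample = ('<html><head><title>A</title><title>A | Site</title>'
          '<link rel="canonical" href="https://example.com/a"/>'
          '</head><body></body></html>')

for tag, n in count_seo_tags(sample).items():
    status = "OK" if n == 1 else f"PROBLEM ({n} found)"
    print(f"{tag}: {status}")
```

Anything other than a count of one — duplicates or a missing tag — points at a module or theme template to fix.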

Pro Tips

Use the SEO Checklist module (drupal/seo_checklist) as a comprehensive audit tool for your Drupal site's SEO configuration. It provides a checklist of every SEO-related module, setting, and configuration you should have in place. While it does not fix issues automatically, it helps you identify gaps in your SEO setup that you might have missed.

For large Drupal sites with custom content types, create a dedicated "SEO Title" and "SEO Description" field on each content type rather than relying on the node title and body summary. This separates editorial titles (which might be creative or long) from SEO titles (which should be keyword-optimized and concise). Bind the Metatag module to these dedicated fields for the most control.

Drupal's JSON:API module (in core since Drupal 8.7) exposes your content as JSON endpoints. While these are not intended for search engines, they can get indexed if linked. Add X-Robots-Tag: noindex headers to JSON:API responses using a custom middleware or .htaccess rule. The same applies to REST endpoints if the RESTful Web Services module is enabled.

For multilingual Drupal sites, ensure the Metatag: hreflang submodule is enabled and properly configured. Drupal's translation system creates separate node entities for each language version, and the hreflang tags must correctly connect all translations. Verify by viewing the source of a translated page and checking that hreflang tags list all available language versions with correct URL paths.

When migrating to Drupal from another CMS, use the Migrate module suite to preserve your old URL structure wherever possible. After migration, run a crawl comparison between the old sitemap and the new one to identify any URLs that changed. Create redirects for every changed URL using the Redirect module's CSV import feature (/admin/config/search/redirect/import) to bulk-add hundreds of redirects at once.

Drupal's Views module can generate RSS feeds that serve as additional discovery paths for Google. Create a feed display on your main content Views and link to it from your site's <head> using <link rel="alternate" type="application/rss+xml">. Submit the feed URL alongside your sitemap in Google Search Console for redundant content discovery.
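For the migration tip above, the crawl comparison amounts to a set difference between old and new sitemap paths. A minimal Python sketch with hypothetical URL lists; every path it reports needs a redirect row before launch:

```python
from urllib.parse import urlparse

def missing_after_migration(old_urls: list[str], new_urls: list[str]) -> list[str]:
    """Paths present in the old sitemap but absent from the new one.
    Compares by path so a domain change does not mask matches."""
    normalize = lambda u: urlparse(u).path.rstrip("/") or "/"
    new_paths = {normalize(u) for u in new_urls}
    return sorted({normalize(u) for u in old_urls} - new_paths)

# Hypothetical before/after sitemap URL lists
old = ["https://old.example.com/about-us", "https://old.example.com/blog/post-1"]
new = ["https://example.com/about-us", "https://example.com/blog/my-post-1"]

print(missing_after_migration(old, new))  # ['/blog/post-1']
```

The resulting list maps directly onto rows for the Redirect module's CSV import.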

Drupal sites are often large, complex, and mission-critical. Whether you have thousands of nodes waiting for Google to discover them or you just completed a major migration with new URL structures, IndexBolt can push your most important pages directly into Google's indexing pipeline. Stop waiting for cron-triggered sitemaps and natural crawl cycles — submit your Drupal URLs through IndexBolt and get them indexed within hours.

100 free credits. No credit card required. See results in under 24 hours.

Frequently Asked Questions

Does Drupal have built-in SEO features?

Drupal core provides the basic building blocks — clean HTML output, a configurable <title> tag, URL aliasing (since Drupal 8 core includes basic alias management), and the ability to serve static files like robots.txt. However, Drupal does not include an XML sitemap, meta description management, automatic canonical URLs, structured data, or redirect management out of the box. These features are provided by contributed modules (Simple XML Sitemap, Metatag, Redirect, Pathauto) that must be installed and configured separately.

Which Drupal modules do I need for SEO?

The essential Drupal SEO module stack includes: Simple XML Sitemap for sitemap generation, Pathauto for automatic URL aliases, Token (required by Pathauto), Metatag for meta tag management, Redirect for 301 redirect handling, and optionally the RobotsTxt module for managing robots.txt through the admin. For multilingual sites, add the Metatag: hreflang submodule. Install all modules via Composer and enable them through /admin/modules or Drush.

How do I prevent /node/123 paths from being indexed?

Use a three-layer approach: (1) Install the Redirect module and enable "Enforce clean and canonical URLs" to automatically 301 redirect /node/123 to the URL alias. (2) Configure the Metatag module to set canonical URLs to the alias path. (3) Add Disallow: /node/ to your robots.txt to prevent Google from crawling internal node paths. Together, these three measures ensure Google only sees and indexes your clean URL aliases, never the internal node paths.

How do I handle URL changes during a Drupal migration?

Before migration, document all URLs on your current site by exporting your sitemap. After migration to Drupal, configure Pathauto to generate aliases matching your old URL structure where possible. For URLs that must change, create 301 redirects using the Redirect module. You can bulk-import redirects via CSV at /admin/config/search/redirect/import. After setting up redirects, resubmit your sitemap to Google Search Console and monitor the Pages report for 404 errors that indicate missing redirects.

Why are my Drupal Views pages not getting indexed?

Views pages may not get indexed for several reasons: the Views page URL is not included in your sitemap (add it manually in Simple XML Sitemap's custom links), the page content is too similar to other pages (paginated views with overlapping content), the page has an incorrect meta robots tag (check the Metatag module configuration for the Views display), or the page requires permissions that anonymous users do not have. Test the Views page URL in Google Search Console's URL Inspection tool to diagnose the specific issue.

Can I use IndexBolt with a Drupal site behind basic authentication?

IndexBolt needs to access your public URLs, so any basic HTTP authentication (common on Drupal staging environments) must be removed for production pages you want indexed. If your staging site has basic auth, that is fine — staging should not be indexed. Your production site should be publicly accessible without any authentication barrier. Drupal's content access permissions are separate from HTTP authentication — as long as anonymous users can view published nodes, IndexBolt can submit those URLs for indexing.

Ready to get your URLs indexed?

Start with 100 free credits. No credit card required.