The goal of SEO specialists should not be to become JavaScript experts; rather, they should develop a firm grasp of how web pages are read and rendered, which lets them treat JS as a friend rather than an adversary. This post discusses how to optimize content crawling and avoid indexing issues around:
- Rendered content
- Lazy-loaded images
- Page load times
- Googlebot first downloads the page's HTML.
- Googlebot then downloads the JS and CSS files referenced in the HTML and sends them to Google's Web Rendering Service (WRS).
- The WRS executes those resources, renders the page, and records the information it finds.
- Caffeine, Google's indexer, processes the rendered page and adds it to the index.
- Links discovered on the rendered page are added to the crawl queue, which is how Googlebot finds new pages.
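That last step is why rendering matters for link discovery: a link that only exists after JavaScript runs, as in this hypothetical sketch, can't enter the crawl queue until the WRS has rendered the page:

```js
// This link is absent from the initial HTML, so Googlebot can only add
// it to the crawl queue after the WRS has rendered the page.
// (The URL is a placeholder; assumes the page has a <nav> element.)
const link = document.createElement('a');
link.href = '/new-collection';
link.textContent = 'New collection';
document.querySelector('nav').appendChild(link);
```

A plain `<a href>` present in the server-sent HTML, by contrast, is discovered immediately, before rendering even happens.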
- Angular by Google
- React by Facebook
- Vue by Evan You
Covering every rendering option out there (pre-rendering, for example) is beyond the scope of this article. Instead, we'll go over the most common ways to serve your site to both search engines and users:
- Server-side rendering
- Dynamic rendering
Server-side rendering (SSR) means that web pages are rendered on the server before they're sent to the client (browser or crawler), rather than on the client itself; the latter approach is known as "client-side rendering" (CSR).
- Everything that matters to search engines is present in the very first HTML response.
- It lets you achieve a fast First Contentful Paint (FCP).
- The trade-off is a slower Time to First Byte (TTFB), because the server has to render each page on the fly.
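As an illustration, here is a minimal SSR sketch using Express and React's renderToString (express, react, and react-dom are assumed as dependencies; ProductList and its data are hypothetical):

```js
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');

// Hypothetical component: a simple server-renderable product list.
const ProductList = ({ products }) =>
  React.createElement(
    'ul',
    null,
    products.map((p) => React.createElement('li', { key: p.id }, p.name))
  );

const app = express();

app.get('/', (req, res) => {
  // Render on the server so the first HTML response already contains
  // everything search engines need.
  const html = renderToString(
    React.createElement(ProductList, {
      products: [{ id: 1, name: 'Example product' }],
    })
  );
  res.send(
    `<!DOCTYPE html><html><head><title>Products</title></head>` +
      `<body><div id="root">${html}</div></body></html>`
  );
});

app.listen(3000);
```

Because the HTML arrives fully populated, crawlers don't need to execute any JS; that is what buys the fast FCP, at the cost of per-request server work (the TTFB trade-off above).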
Dynamic rendering is a setup in which the server responds differently depending on who made the request: when a crawler requests a page, the server renders the HTML and returns it; when a human visitor requests the same page, it is rendered client-side as usual.
This rendering option is a workaround and should only be used as a last resort in extreme circumstances. Note, however, that Google does not consider dynamic rendering to be cloaking, as long as it generates identical content for both types of request.
- The first HTML response a search engine crawler receives includes every element the search engine cares about.
- It is usually easier and faster to implement than full server-side rendering.
- It makes issues harder to debug, because crawlers and users go through different code paths.
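For illustration, here is a minimal dynamic-rendering sketch with Express and Puppeteer (both assumed installed; the bot list and URL are illustrative):

```js
const express = require('express');
const puppeteer = require('puppeteer');

// Illustrative bot list; real setups often rely on a maintained middleware.
const BOTS = /googlebot|bingbot|twitterbot|facebookexternalhit|linkedinbot|slackbot/i;

const app = express();

app.use(async (req, res, next) => {
  if (!BOTS.test(req.headers['user-agent'] || '')) {
    return next(); // humans fall through to the normal client-side app
  }
  // Crawlers receive fully rendered HTML produced by headless Chrome.
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(`https://example.com${req.originalUrl}`, {
      waitUntil: 'networkidle0', // wait for JS-loaded content to settle
    });
    res.send(await page.content());
  } finally {
    await browser.close();
  }
});

app.listen(3000);
```

Launching a browser per request is expensive; real deployments cache the rendered pages or use a dedicated rendering service instead.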
What about social media crawlers?
Social media crawlers, such as those from Twitter, Facebook, LinkedIn, and Slack, also need plain HTML access to extract useful information.
They look for the page's Open Graph or Twitter Card markup in the initial HTML. If they can't find it, they can't build a snippet with the page title and meta description, so your snippet will look unprofessional and you won't get much attention from these networks.
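For reference, here is what that markup looks like in a page's `<head>` (all values are placeholders):

```html
<!-- Open Graph tags (used by Facebook, LinkedIn, Slack, and others) -->
<meta property="og:title" content="Example page title" />
<meta property="og:description" content="Example description for the snippet." />
<meta property="og:image" content="https://example.com/preview.png" />
<!-- Twitter Card tags -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Example page title" />
```

Because these crawlers generally don't execute JavaScript, the tags must be present in the server-delivered HTML, not injected client-side.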
Viewed in a browser, this page looks like a typical web page: we can see text, images, and links. Take a closer look at the underlying code, however.
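Here is a representative sketch of what the source of such a page might look like (an illustration, not the actual page):

```html
<!DOCTYPE html>
<html>
  <head><title>Products</title></head>
  <body>
    <!-- No visible content in the HTML itself -->
    <div id="app"></div>
    <!-- Everything the user sees is generated by this bundle at runtime -->
    <script src="/bundle.js"></script>
  </body>
</html>
```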
This is where SEO problems arise: users can see the important content, but search engine robots cannot. That can be very costly! If search engines can't get through your whole page, your website could get lost in the shuffle. We'll dig into this in more depth below.
Googlebot can handle slow page loading, but it doesn't "scroll" through your web pages the way a human would. Instead, when rendering a page's content, it simply enlarges its virtual viewport. That means a "scroll" event listener never fires, and the crawler never sees the content that listener was supposed to load.
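For example, a lazy-loading pattern like this simplified sketch fails for Googlebot (the "data-src" convention is illustrative):

```js
// Scroll-driven lazy loading: swap in the real image source once the
// placeholder scrolls into view.
document.addEventListener('scroll', () => {
  document.querySelectorAll('img[data-src]').forEach((img) => {
    if (img.getBoundingClientRect().top < window.innerHeight) {
      img.src = img.dataset.src; // load the real image
      img.removeAttribute('data-src');
    }
  });
});
```

Because Googlebot never scrolls, this listener never fires for it, and images below the fold are never loaded.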
By contrast, consider this more SEO-friendly version (a minimal sketch using the same "data-src" convention):
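```js
// Lazy-load images with IntersectionObserver instead of scroll events.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;   // swap in the real image
      img.removeAttribute('data-src');
      obs.unobserve(img);          // each image only needs loading once
    }
  });
});

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
```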
The IntersectionObserver API notifies you when an observed element becomes visible. It is more robust and flexible than an on-scroll event listener, and it is supported by the most recent version of Googlebot. This code works because when Googlebot enlarges its viewport to "see" your content, the observed images intersect it and get loaded.
For eCommerce businesses that rely on online conversions, not having their products indexed by Google could spell disaster.
- Use the URL Inspection tool in Google Search Console (formerly Webmaster Tools) to see a rendered picture of the page, so you can look at your website from Google's point of view rather than your own.
- Debug with Chrome's built-in developer tools: compare the page source with the rendered code to see what Google receives versus what people see, and make sure they are substantially the same.
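To automate that comparison, here is a hedged sketch that fetches both the raw HTML and the rendered DOM of a page (assumes Node 18+ for the global fetch and the puppeteer package; the URL is a placeholder):

```js
const puppeteer = require('puppeteer');

(async () => {
  const url = 'https://example.com/';

  // What a crawler's first wave receives.
  const raw = await (await fetch(url)).text();

  // What the page looks like after JavaScript has run.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const rendered = await page.content();
  await browser.close();

  console.log('raw HTML length:', raw.length);
  console.log('rendered DOM length:', rendered.length);
  // A large gap suggests content that exists only after rendering.
})();
```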
Server-side rendering: JS is executed on the server for each request. SSR can be implemented with a Node.js library such as Puppeteer; this, however, can place a significant load on the server.
Hybrid rendering: as the name suggests, it combines server-side and client-side rendering. The core content is rendered on the server and delivered first; the client then renders the rest.
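As a rough illustration of hybrid rendering (Express assumed as the server; routes, names, and data are all hypothetical):

```js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send(`<!DOCTYPE html>
<html>
  <head><title>Hybrid rendering demo</title></head>
  <body>
    <!-- Core content: server-rendered, visible to crawlers immediately -->
    <main id="core"><h1>Example product</h1><p>Key details here.</p></main>
    <!-- Secondary content: filled in client-side after load -->
    <aside id="extras"></aside>
    <script>
      fetch('/api/recommendations')
        .then((r) => r.json())
        .then((items) => {
          document.getElementById('extras').textContent = items.join(', ');
        });
    </script>
  </body>
</html>`);
});

app.get('/api/recommendations', (req, res) =>
  res.json(['Related item A', 'Related item B'])
);

app.listen(3000);
```

Crawlers get the core content in the initial HTML, while secondary content such as recommendations loads client-side.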
Wrapping it up