The first rule of search engine optimization is to create really good content. The second rule of search engine optimization is to create really fast, user-friendly websites. If human beings don't want to engage with your content, search engines won't want to either.
We live in the age of machine learning and quantum computing. You can't just put a bunch of keywords into a page and expect to do well.
When Google first hit the market in the late 1990s, it relied on an algorithm called PageRank, which weighted relevance and search ranking based on the number and quality of inbound links a website had.
People quickly learned to game the algorithm by spraying backlinks all over the Internet to inflate a site's PageRank.
Because a high ranking in Google can be worth millions of dollars, it spawned an entire industry of SEO experts: the good guys wear white hats, hackers wear black hats, but the most effective ones wear gray hats.
Some people say it's a dying industry, because it's becoming harder and harder to manipulate Google's technology. There are over 200 factors that go into a site's ranking, and they're geared mostly toward how useful a user actually found your site.
Did they immediately bounce by clicking the back button, or did they dwell on the page for a long time, clicking other links and absorbing all kinds of useful content?
Content is king but the third rule of search engine optimization is to render HTML that can be reliably understood by bots.
Your main content goes inside the body tags. When Google crawls your site, it uses semantic HTML elements to understand the content on the page. You might put your main content in an article tag, then put your most important keywords in headings (h1, h2, h3, and so on) to signal what your page is about.
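As a sketch, a semantically marked-up page body might look like this (the topic, headings, and text are placeholders):

```html
<body>
  <!-- The article element tells crawlers this is the main, self-contained content -->
  <article>
    <h1>How to Bake Sourdough Bread</h1>
    <h2>Ingredients</h2>
    <p>Flour, water, salt, and a healthy starter.</p>
    <h2>Method</h2>
    <p>Mix, fold, proof, and bake.</p>
  </article>
</body>
```

The h1 carries the page's primary keyword, while h2 elements break the content into sections that bots can map to subtopics.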
Your HTML should also be accessible: alt attributes on images and ARIA attributes make your site usable on assistive devices. In addition, the head of the document holds metadata that isn't directly shown to the end user.
But bots can use this data to further understand the page and format the actual appearance of your search engine listing.
The first two rules are very subjective and completely dependent on your audience, but the overall goal is this: when someone clicks your link on a search engine results page, they should interact with your website for as long as possible.
There are a few metrics that you want to be aware of here:
The first one is click-through rate, or CTR, which measures how likely a user is to click on your link when it's displayed on a search engine results page, or SERP.
The higher the CTR the better; a high CTR usually means you have a very relevant title and description.
Now if a user clicks on your link and then immediately clicks the back button, that's called a bounce. The higher your bounce rate, the less likely your site is to rank well in the long term, because apparently the content on the page is not very relevant.
If the user stays on the page, Google can track dwell time: the amount of time spent on the page before clicking back to the search results. The best thing that can happen is that the user never clicks back and the session lasts forever.
So you keep track of the average session duration and the average number of pages per session; these are metrics that you want to maximize.
We’ve only been looking at the body of the document, but the head contains all kinds of useful metadata for SEO. Most importantly, this is where you set the title. Choose your title carefully, because it's displayed on the SERP and ultimately drives your CTR.
One of the goals of this article is to explain which meta tags can potentially help your SEO rankings and which have mostly fallen out of use.
There are four major types of meta tags worth knowing.
Others are worth using regularly and will very likely increase your traffic by letting Google know who you are and what you provide.
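As a sketch, a document head might combine the title with a few commonly used meta tags (all values here are placeholders):

```html
<head>
  <!-- Displayed as the headline of your SERP listing -->
  <title>Sourdough Guide – Bake Better Bread</title>
  <!-- Often used by Google as the snippet below the title -->
  <meta name="description" content="A step-by-step guide to baking sourdough at home.">
  <!-- Tells crawlers whether to index the page and follow its links -->
  <meta name="robots" content="index, follow">
  <!-- Needed for proper rendering on mobile devices -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
```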
There are other ways to add metadata to your content, and this matters for both SEO and accessibility.
One of the most fundamental techniques is to add an alt attribute to images, which is just some text that describes the image. This metadata can be used by search engines and also by screen readers for users with disabilities. For elements that are a little more complicated, ARIA attributes can describe their role and state to assistive technology.
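For instance (filenames and labels here are made up), an image and a more complex interactive element might be annotated like this:

```html
<!-- The alt text describes the image for search engines and screen readers -->
<img src="sourdough.jpg" alt="A freshly baked loaf of sourdough bread on a cooling rack">

<!-- ARIA attributes expose the state of a collapsible section to assistive devices -->
<button aria-expanded="false" aria-controls="recipe-steps">Show recipe steps</button>
<ol id="recipe-steps" hidden>
  <li>Mix the dough.</li>
  <li>Let it rise overnight.</li>
</ol>
```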
There are three fundamental ways to render HTML.
The first option is client-side rendering, where JavaScript builds the page in the browser. The problem is that the initial HTML is just a shell, and search engines may have a hard time understanding and indexing it.
The second option is static generation, where every page is pre-rendered at build time. That's great for SEO because bots get fully rendered HTML and can easily interpret the content on the page. If you're fetching data from a database, you only have to do that once at build time; you can then cache the page on a CDN and serve it to millions of people without regenerating it.
The trade-off with this approach though is that the data and the pre-rendered content could become stale which means bots will be getting outdated information until you rebuild and redeploy the entire site.
That's no big deal if you have a few hundred pages that don't change very often.
But if you have millions of pages with highly dynamic data then it doesn't scale and that brings us to option number three:
Server-side rendering: in this paradigm, when the user requests a page, the HTML is generated on the server. This is also great for SEO because bots get fully rendered HTML on the initial request; in addition, the data will always be fresh because you're making a new request to the server each time.
But the drawback here is that it's generally less efficient: you might be fetching and rendering the same HTML over and over again. It is possible to do server-side caching, but that's not as efficient as edge caching on a CDN and will cost a lot more to operate at scale. And if things aren't cached efficiently, that means a slower time to meaningful content, which can negatively impact SEO.
So basically, between these three methods we have a trade-off between data freshness, performance, and client-side interactivity. But what if there were a way to have our cake and eat it too?
Incremental Static Regeneration: Next.js allows you to create or update static pages after you've built your site. Incremental Static Regeneration (ISR) enables developers and content editors to use static-generation on a per-page basis, without needing to rebuild the entire site. With ISR, you can retain the benefits of static while scaling to millions of pages.
That means you get all of the performance benefits of static pages while ensuring that they always contain fresh data, which eliminates the trade-offs we just talked about. The catch is that incremental static regeneration requires a more complex back-end deployment. In my opinion, this is the future of full-stack web development.
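In Next.js, ISR boils down to returning a revalidate interval from getStaticProps. This sketch uses a hard-coded list as a stand-in for a real CMS or database query; in an actual app the function would be exported from a file under pages/:

```javascript
// Sketch of a Next.js getStaticProps using Incremental Static Regeneration.
async function getStaticProps() {
  const posts = [{ title: "Hello World" }]; // stand-in for a CMS or database query
  return {
    props: { posts },
    // Next.js will regenerate this page in the background,
    // at most once every 60 seconds, after a request comes in.
    revalidate: 60,
  };
}
```

Visitors keep getting the cached static page instantly, while the background regeneration keeps the data at most a minute old.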
In this article, I explained what a front-end developer needs to know about SEO and how different rendering strategies affect it. Thanks for reading.