Front-End Developer SEO Handbook

October 12, 2021 · 8 min read

The first rule of search engine optimization is to create really good content. The second rule of search engine optimization is to create really fast, user-friendly websites. If human beings don't want to engage with your content, search engines won't want to either.

--

We live in the age of machine learning and quantum computing. You can't just put a bunch of keywords into a page and expect to do well. 

When Google first hit the market in the late 1990s, it relied on an algorithm called PageRank, which weighted relevance and search ranking based on the number of inbound links a website had.

People quickly learned how to take advantage of the algorithm by spreading backlinks all over the Internet to inflate a site's PageRank.

Because a high ranking in Google can be worth millions of dollars, it brought us an entire industry of SEO experts: the good guys wear white hats, hackers wear black hats, but the most effective ones wear gray hats.

Some people say it's a dying industry, because it's becoming harder and harder to manipulate Google's technology. There are over 200 factors that go into a site's ranking, and they are geared mostly towards how useful users find your site.

Did they immediately bounce by clicking the back button, or did they dwell on the page for a long time, clicking other links and absorbing all kinds of useful content?

Content is king, but the third rule of search engine optimization is to render HTML that can be reliably understood by bots.

Your main content goes inside the body tags. When Google crawls your site, it uses semantic HTML elements to understand the content on the page. You might put your main content in an article tag, then put your most important keywords in headings, the h1, h2, h3 tags, to signal what your page is about.
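
As a rough illustration, a semantically structured page might look something like the sketch below (written as a JSX/TSX component, since the rendering section later leans on React and Next.js; the plain-HTML equivalent uses the same tags):

```tsx
// A sketch of a semantically structured page. The tag choices
// (article, section, h1, h2) are what signal the page topic to crawlers;
// the component and its text are only an illustration.
export default function BlogPost() {
  return (
    <article>
      {/* One h1 per page: the primary topic */}
      <h1>Front-End Developer SEO Handbook</h1>

      <section>
        {/* h2/h3 headings describe sub-topics in descending importance */}
        <h2>Create Awesome Content</h2>
        <p>Write for humans first; search engines reward engagement.</p>
      </section>

      <section>
        <h2>Load Content Fast</h2>
        <p>Ship meaningful HTML on the first response.</p>
      </section>
    </article>
  );
}
```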

Your HTML should also be accessible: alt attributes on images and ARIA attributes are needed to make your site usable with assistive devices.

In the head of the document, we also have metadata that's not directly shown to the end user, but bots can use this data to further understand the page and format the actual appearance of your search engine listing.

Create Awesome Content

The first two rules are very subjective and completely dependent on your audience, but the overall goal is this:

When someone clicks a link to your website on a search engine results page, they should be interacting with your website for as long as possible.

There are a few metrics that you want to be aware of here: 

The first one is click-through rate, or CTR, which measures how likely a user is to click on your link when it's displayed on a search engine results page, or SERP.

The higher the CTR, the better; a high CTR usually means you have a very relevant title and description.

Now, if a user clicks on your link and then immediately clicks the back button, that's called a bounce. The higher your bounce rate, the less likely your site is to rank well in the long term, because apparently the content on the page is not very relevant.

If the user stays on the page, Google will track the dwell time: the amount of time spent on the page before returning to the search results. The best thing that can happen is that the user never clicks back and their session lasts forever.

So what you want to keep track of is the average session duration and the average number of pages per session; these are metrics you want to maximize.
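
To make the arithmetic behind these metrics concrete, here is a small sketch with entirely made-up numbers; the formulas (clicks divided by impressions, bounces divided by sessions) are the standard definitions, but none of the values come from the article:

```ts
// Hypothetical numbers, e.g. from a search console or analytics export.
const impressions = 12_000;         // times your link appeared on a results page
const clicks = 480;                 // times it was clicked
const sessions = 480;               // visits that resulted from those clicks
const bounces = 180;                // sessions that ended with an immediate back-click
const totalSessionSeconds = 62_400; // combined time visitors spent on the site

const ctr = clicks / impressions;                          // 0.04  -> 4% click-through rate
const bounceRate = bounces / sessions;                     // 0.375 -> 37.5% of visitors bounced
const avgSessionDuration = totalSessionSeconds / sessions; // 130 seconds per session

console.log({ ctr, bounceRate, avgSessionDuration });
```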

HTML Meta Tags

We've only been looking at the body of the document, but the head contains all kinds of useful metadata for SEO. Most importantly, this is where you set the title. Choose your title carefully, because it's displayed on the SERP and will ultimately influence your CTR.

One of the goals of this article is to explain which meta tags can potentially help your SEO rankings and which have mostly fallen out of use. 

There are four major types of meta tags worth knowing about. Some have mostly fallen out of use, while others are worth using regularly and will very likely increase your traffic by letting Google know who you are and what you provide. A minimal example follows the list below.

  • Meta Keywords Attribute - A series of keywords you deem relevant to the page in question.
  • Title Tag - This is the text you'll see at the top of your browser. Search engines view this text as the "title" of your page.
  • Meta Description Attribute - A brief description of the page.
  • Meta Robots Attribute - An indication to search engine crawlers (robots or "bots") as to what they should do with the page.
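
As a rough sketch, here is how those four tags might look in the head of a Next.js page using the next/head component (plain <title> and <meta> tags in a static HTML head work the same way); the title, description, and keyword strings are placeholders:

```tsx
import Head from "next/head";

// Illustrative only: all of the strings below are placeholders.
export default function HomePage() {
  return (
    <>
      <Head>
        {/* Title tag: shown as the headline of your search listing */}
        <title>Front-End Developer SEO Handbook</title>

        {/* Meta description: often used as the snippet under the title */}
        <meta
          name="description"
          content="What front-end developers need to know about SEO, meta tags, and rendering."
        />

        {/* Meta robots: tells crawlers how to treat the page */}
        <meta name="robots" content="index, follow" />

        {/* Meta keywords: still valid HTML, but largely ignored by modern search engines */}
        <meta name="keywords" content="seo, front-end, rendering" />
      </Head>

      <main>{/* page content */}</main>
    </>
  );
}
```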

There are other ways you can add metadata to your content, and this is very important for SEO and also for accessibility.

One of the most fundamental techniques is to add an alt attribute to images, which is just some text that describes the image. This metadata can be used by search engines and also by screen readers for those with disabilities; for elements that are a little more complicated, ARIA attributes serve a similar purpose.
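
A brief sketch of both techniques; the image path, caption, and label text are hypothetical:

```tsx
// Hypothetical markup: the image path and all text are placeholders.
export function ProfileCard() {
  return (
    <figure>
      {/* alt text describes the image for crawlers and screen readers */}
      <img
        src="/images/team-photo.jpg"
        alt="The team standing in front of the office"
      />
      <figcaption>Our team, summer 2021</figcaption>

      {/* For less semantic elements, ARIA attributes fill the gap */}
      <button aria-label="Close the profile card">×</button>
    </figure>
  );
}
```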

Load Content Fast

There are three fundamental ways to render HTML. 

The first one we'll look at is client-side rendering. If you're building an app with something like React or Angular, the default mode is client-side rendering, or a single-page application. On the initial page load, the user gets a shell of HTML without any meaningful content; the JavaScript code then bootstraps and asynchronously fetches any additional data needed for the UI. Apps like this are great for interactivity, because the end result feels similar to what you'd expect from a native iOS or Android app.
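
Here is a minimal sketch of that client-side pattern in React: the first response contains no post data, and content only appears after JavaScript runs in the browser. The /api/posts endpoint and the post shape are hypothetical:

```tsx
import { useEffect, useState } from "react";

// Client-side rendering: the first HTML a crawler sees contains no posts;
// they are fetched and rendered only after the JavaScript bundle executes.
export default function Posts() {
  const [posts, setPosts] = useState<{ id: number; title: string }[]>([]);

  useEffect(() => {
    fetch("/api/posts") // hypothetical endpoint
      .then((res) => res.json())
      .then(setPosts);
  }, []);

  if (posts.length === 0) return <p>Loading…</p>; // what a bot may end up indexing

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```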

The problem is that the initial HTML is just a shell; search engines may have a hard time understanding and indexing it.

If you take a link from a single-page application and post it on Twitter, you only see that initial shell; you won't see any of the meta tags that were generated by JavaScript after the fact, and that's not great for social media. Google's crawler can index client-rendered apps, but the reliability is questionable.

So another option is to pre-render, or statically generate, the HTML. We could generate all the HTML for those pages in advance, then upload the static files to a storage bucket that can be cached on a global CDN, so the first thing the user sees is fully rendered content; the JavaScript then loads and makes the page interactive.

That's great for SEO, because bots get fully rendered HTML and can easily interpret the content on the page. If you're fetching data from a database, you only have to do that once, at build time. You can then cache the page on a CDN and serve it to millions of people without having to regenerate it.
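
In Next.js, for example, that build-time step can be expressed with getStaticProps; the sketch below assumes a hypothetical posts API:

```tsx
// Static generation: this runs once at build time, and the resulting HTML
// can be cached on a CDN and served as-is to every visitor.
export async function getStaticProps() {
  const res = await fetch("https://example.com/api/posts"); // placeholder URL
  const posts: { id: number; title: string }[] = await res.json();
  return { props: { posts } };
}

export default function Posts({ posts }: { posts: { id: number; title: string }[] }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```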

The trade-off with this approach, though, is that the data in the pre-rendered content can become stale, which means bots will be getting outdated information until you rebuild and redeploy the entire site.

That's no big deal if you have a few hundred pages that don't change very often. 

But if you have millions of pages with highly dynamic data, it doesn't scale, and that brings us to option number three:

Server-side rendering. In this paradigm, when the user requests a page, the HTML is generated on the server. This is also great for SEO, because bots get fully rendered HTML on the initial request. In addition, the data will always be fresh, because you're making a new request to the server each time.
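
Continuing the Next.js example, the same page can be server-rendered by swapping getStaticProps for getServerSideProps, which runs on every request; the data source is again a placeholder:

```tsx
// Server-side rendering: this runs on every request, so the HTML a bot
// receives is always freshly generated from the latest data.
export async function getServerSideProps() {
  const res = await fetch("https://example.com/api/posts"); // placeholder URL
  const posts: { id: number; title: string }[] = await res.json();
  return { props: { posts } };
}

export default function Posts({ posts }: { posts: { id: number; title: string }[] }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```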

But the drawback here is that it's generally less efficient: you might be fetching and rendering the same HTML over and over again. It is possible to do server-side caching, but that's not as efficient as edge caching on a CDN and will cost a lot more to operate at scale. And if things aren't cached efficiently, that means a slower time to first meaningful content, which can negatively impact SEO.
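
One common mitigation, sketched below, is to set caching headers on the server-rendered response so a CDN or proxy can reuse it for a short window; the header values and URL here are illustrative assumptions, not recommendations from the article:

```tsx
import type { GetServerSidePropsContext } from "next";

// Illustrative caching for a server-rendered page: allow a CDN to reuse the
// response for 60 seconds, then serve stale HTML while it revalidates.
export async function getServerSideProps({ res }: GetServerSidePropsContext) {
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=300"
  );

  const data = await fetch("https://example.com/api/posts").then((r) => r.json()); // placeholder
  return { props: { data } };
}

export default function Posts({ data }: { data: unknown }) {
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
```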

So basically, between these three methods we have a trade-off between data freshness, performance, and client-side interactivity. But what if there were a way to have our cake and eat it too…

Incremental Static Regeneration: Next.js allows you to create or update static pages after you've built your site. Incremental Static Regeneration (ISR) enables developers and content editors to use static generation on a per-page basis, without needing to rebuild the entire site. With ISR, you can retain the benefits of static while scaling to millions of pages.
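
In code, ISR is just static generation with a revalidate interval; in the sketch below, the 60-second window and the posts API are arbitrary placeholders:

```tsx
// Incremental Static Regeneration: the page is statically generated, but
// Next.js will regenerate it in the background at most once every 60 seconds
// when new requests come in, so the cached HTML never gets too stale.
export async function getStaticProps() {
  const res = await fetch("https://example.com/api/posts"); // placeholder URL
  const posts: { id: number; title: string }[] = await res.json();

  return {
    props: { posts },
    revalidate: 60, // seconds before a background regeneration can be triggered
  };
}

export default function Posts({ posts }: { posts: { id: number; title: string }[] }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```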

That means you get all of the performance benefits of static pages while ensuring that those pages always contain fresh data, which eliminates the trade-offs we just talked about. Incremental static regeneration does, however, require a more complex back-end deployment. In my opinion, this is the future of full-stack web development.

Conclusion

In this article, I explained what a front-end developer needs to know about SEO and how the different rendering strategies work with it. Thanks for reading through.