In the previous part, we developed a glossary of SEO terms covering the basic, fundamental definitions you need to develop your website's positioning strategy on search engines. In this second part, we delve into more advanced concepts that will help you become an SEO expert.
In this dictionary, our experts Blas Giffuni and Camilo Ramírez define and explain, in practical terms, the words that make up SEO jargon, so you can learn concepts that will be useful for properly positioning your website. Let's dive in!
Crawler, Spider or Bot
The crawler, also known as a spider or bot, is software that traverses the web, analyzing content and providing information to improve search engine performance. Essentially, it's how search engines review websites, gathering information about their composition and structure and sending it back for indexing.
What components does the crawler or spider analyze on a website?
URLs or links.
Textual content (keywords, corpus, information structure, etc.).
HTML tags.
Images and/or videos.
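As a toy illustration of the idea (not how production crawlers are actually built), a small script can fetch a page and extract some of the elements listed above. The sketch assumes Node.js 18+, which provides a global fetch, and the starting URL is a placeholder:

    // Toy crawler sketch (Node.js 18+ provides a global fetch).
    // Real crawlers use robust HTML parsers and respect robots.txt;
    // the regular expressions here are only for illustration.
    async function crawl(url) {
      const response = await fetch(url);
      const html = await response.text();

      // Pull out the page title and the href of every anchor tag.
      const title = html.match(/<title>(.*?)<\/title>/i)?.[1];
      const links = [...html.matchAll(/<a[^>]+href="([^"]+)"/gi)].map((m) => m[1]);

      console.log('Title:', title);
      console.log('Links found:', links.length);
    }

    crawl('https://www.example.com'); // hypothetical starting URL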
Robots.txt
When creating a new website, it's important for Google or any other search engine to be able to access it for crawling and indexing. To tell search engines how they may access your website, you create a plain-text file named robots.txt at the root of your domain, specifying which parts of your website you want crawled and indexed, and which ones you'd rather keep out of search engine results pages. Robots.txt is essential for website indexing, so it should be configured correctly to avoid accidentally marking parts of the website as off-limits when they should be crawled.
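A minimal robots.txt might look like the sketch below; the disallowed path is just an illustrative placeholder, and the optional Sitemap line points crawlers to your sitemap:

    User-agent: *
    Disallow: /admin/

    Sitemap: https://www.example.com/sitemap.xml

Here, User-agent: * addresses all crawlers, and Disallow: /admin/ asks them not to crawl anything under that path.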
Metadata
Metadata, often defined as "data about data," is structured, straightforward information that describes other data. On a webpage, metadata takes the form of HTML elements that describe the page containing them. Their purpose is to give search engines information about the type of content offered and the topics covered. For example, common metadata for a webpage includes:
Meta Title: the clickable title shown when a page appears in search engine results.
Meta Description: an HTML tag used to briefly describe the content offered on a webpage. This information appears below the meta title and URL in search engine results. It's recommended to keep it between 140 and 160 characters.
Videos, images, and links also contain metadata.
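As a quick sketch, these two elements might be declared in a page's HTML head like this (the text values are placeholders):

    <head>
      <title>SEO Glossary, Part 2 | Example Site</title>
      <meta name="description"
            content="Advanced SEO terms explained: crawlers, robots.txt, metadata, structured data, sitemaps, and Core Web Vitals.">
    </head>

Note that the meta title is expressed with the <title> element, while the description uses an actual <meta> tag.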
Structured Data
Structured data refers to standardized tags that give search engines additional information about a website's content, making their job easier. It also helps create a better user experience through rich snippets, which display information more attractively and accurately in search engine results pages. In summary, structured data presents search engines with data about a website's offerings in a functional way, following common standards. With data organized this way, crawlers can identify it more easily and know what to do with it. The most widely used vocabulary for structuring data is Schema.org, a collaborative community that develops schemas for structured data on the internet. How the data is structured varies depending on whether you offer products, services, or other content.
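For instance, a product page might embed Schema.org markup as JSON-LD, one of the formats search engines accept; all the values below are hypothetical:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Running Shoe",
      "description": "Lightweight running shoe for daily training.",
      "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD"
      }
    }
    </script>

With markup like this, a crawler can recognize the page as a product listing and may show details such as the price in a rich snippet.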
Sitemap
A sitemap is a map of your website and the way you inform search engines about its contents. What is the difference between sitemap.html and sitemap.xml?
Sitemap.html: a document that humans can read and navigate from the website.
Sitemap.xml: an XML document listing each URL's location, its priority or importance within the domain, when it was last modified, and how frequently it changes.
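A minimal sitemap.xml with a single entry, following the sitemaps.org protocol, might look like this (the URL and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/blog/seo-glossary-part-2</loc>
        <lastmod>2021-06-15</lastmod>
        <changefreq>monthly</changefreq>
        <priority>0.8</priority>
      </url>
    </urlset>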
Core Web Vitals
Core Web Vitals are a set of metrics that Google analyzes to assess the user experience of navigating a website, and they are taken into account when defining its positioning. Announced by Google in 2020, they confirm that, for Google, user experience is a priority in its own right, beyond the algorithms. Specifically, three aspects are analyzed, each of which can have a high impact on the user experience:
Loading time: measured by Largest Contentful Paint (LCP), which captures perceived loading speed, that is, how long the largest visible element takes to render.
Interactivity: measured by First Input Delay (FID), the time between a user's first interaction and the browser's response to it.
Visual stability: measured by Cumulative Layout Shift (CLS), which quantifies unexpected shifts in the layout of visible content.
With this, Google is effectively saying: "I don't like a website that takes too long to load." "I also don't like it when a website has loaded but the user can't interact with it easily or quickly." "And I don't like it when the website's layout shifts around after loading."
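You can observe two of these metrics in the browser with the standard PerformanceObserver API. The sketch below logs LCP candidates and a running CLS score; it's a simplified illustration, not Google's own measurement code (their open-source web-vitals library handles the edge cases properly):

    // Log Largest Contentful Paint candidates as they occur.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log('LCP candidate (ms):', entry.startTime);
      }
    }).observe({ type: 'largest-contentful-paint', buffered: true });

    // Accumulate layout shifts, ignoring those caused by recent user
    // input, which is how CLS is defined.
    let clsScore = 0;
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (!entry.hadRecentInput) {
          clsScore += entry.value;
          console.log('Cumulative layout shift so far:', clsScore);
        }
      }
    }).observe({ type: 'layout-shift', buffered: true });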
Google Search Console
According to Google, Search Console is "a free service that helps you monitor, maintain, and troubleshoot your website's presence in Google search results." It's the channel Google uses to communicate a website's diagnostics to its owners and administrators. Through it, webmasters are notified about errors on a site, its positioning, its click-through rate, and other metrics. This tool is crucial because the information comes directly from Google.
JavaScript
JavaScript is a programming language widely used in website development that works across different platforms. It helps make a website more interactive and dynamic. However, it's known as the "SEO bogeyman," because a poor JavaScript implementation can leave search engines unable to find any content on our website. Used correctly, though, it can be a powerful tool that increases conversion rates on our website.
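As an illustrative sketch of the risk: if a page's main content exists only after a script runs, a crawler that doesn't execute JavaScript sees an empty shell. The element ID and API endpoint below are hypothetical:

    <div id="product-list"></div>
    <script>
      // The content is injected only after this script runs, so a
      // crawler that does not execute JavaScript sees an empty <div>.
      fetch('https://www.example.com/api/products') // hypothetical endpoint
        .then((response) => response.json())
        .then((products) => {
          document.getElementById('product-list').innerHTML = products
            .map((p) => `<h2>${p.name}</h2><p>${p.description}</p>`)
            .join('');
        });
    </script>

Server-side rendering or pre-rendering the same markup is a common way to avoid this problem.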
That's it for the second part of our SEO glossary or dictionary. We invite you to keep reading our blog to learn more about search engine positioning!