10 SEO Tips You’d Be Surprised You Didn’t Know by @texasgirlerin

The SEO tips below are a blend of analytics, organization, & productivity practices that seem to be frequently forgotten during chats with other marketers.

The post 10 SEO Tips You’d Be Surprised You Didn’t Know by @texasgirlerin appeared first on Search Engine Journal.


Search In Pics: GoogleBot In Snow, Google Pet Toys & John Mueller In Star Wars

In this week’s Search In Pictures, here are the latest images culled from the web, showing what people eat at the search engine companies, how they play, who they meet, where they speak, what toys they have and more.

World’s Largest Android Marshmallow Mosaic

Google John Mueller Star Wars (Source: Google+)

Google Pet Toy (Source: Google+)

GoogleBots Playing In The Snow (Source: Google+)

The post Search In Pics: GoogleBot In Snow, Google Pet Toys & John Mueller In Star Wars appeared first on Search Engine Land.

Why Everyone Should Be Moving To HTTP/2


If I told you that your website could load faster, your server could use fewer resources, your developers wouldn’t have to waste time on hacks to increase site speed and you’d get a boost to your rankings all from one simple change, you’d probably call me a liar. If it sounds too good to be true, then it must be, right?

Wrong! The future is here with one of the greatest advancements in web technology in the past 20 years, and the SEO community doesn’t seem to be talking about it.

When Barry Schwartz posted a recap of a recent Google Webmaster Central Hangout in which Google’s John Mueller said that GoogleBot will support HTTP/2 by the end of this year or early next year, I expected a mad scramble and people shouting from the rooftops. Instead, there were crickets throughout the SEO industry.

You should already have switched to HTTP/2 for many reasons, including a tremendous speed increase, which makes for a better user experience, but now there are potential ranking factors on the line, as well.

What Is HTTP/2?

HTTP/2 is the latest update to the HTTP protocol by the Internet Engineering Task Force (IETF). The protocol is the successor to HTTP/1.1, which was drafted in 1999. HTTP/2 is a much-needed refresh, as the web has changed over the years. The update brings with it advancements in efficiency, security and speed.

Where Did HTTP/2 Come From?

HTTP/2 was based largely on Google’s own protocol SPDY, which will be deprecated in 2016. The protocol had many of the same features found in HTTP/2 and managed to improve data transmission while keeping backwards compatibility. SPDY had already proven many of the concepts used in HTTP/2.

Major Improvements In HTTP/2

  • Single Connection. Only one connection to the server is used to load a website, and that connection remains open as long as the website is open. This reduces the number of round trips needed to set up multiple TCP connections.
  • Multiplexing. Multiple requests are allowed at the same time, on the same connection. Previously, with HTTP/1.1, each transfer would have to wait for other transfers to complete (see the sketch after this list).
  • Server Push. Additional resources can be sent to a client for future use.
  • Prioritization. Requests are assigned dependency levels that the server can use to deliver higher priority resources faster.
  • Binary. Makes HTTP/2 easier for a server to parse, more compact and less error-prone. No additional time is wasted translating information from text to binary, which is the computer’s native language.
  • Header Compression. HTTP/2 uses HPACK compression, which reduces overhead. In HTTP/1.1, many headers were sent with the same values on every request.

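To see the multiplexing behavior from the client side, here is a minimal sketch in Python using the httpx library (installed with its http2 extra). The host and resource paths are hypothetical placeholders; whether the requests actually share a single connection depends on the server negotiating HTTP/2.

```python
import asyncio

import httpx  # pip install "httpx[http2]"

# Hypothetical resources on a hypothetical HTTP/2-capable site.
BASE = "https://www.example.com"
PATHS = ["/", "/css/site.css", "/js/app.js", "/images/logo.png"]

async def fetch_all():
    # With http2=True, httpx negotiates HTTP/2 where the server supports it,
    # so these concurrent requests can be multiplexed over one connection.
    async with httpx.AsyncClient(http2=True) as client:
        responses = await asyncio.gather(*(client.get(BASE + p) for p in PATHS))
        for resp in responses:
            print(resp.http_version, resp.status_code, resp.url)

if __name__ == "__main__":
    asyncio.run(fetch_all())
```
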
There are several demos out there where you can see the difference in action with tiled images. It appears that as the latency increases, the speed increase from HTTP/2 is even more noticeable, which is great for mobile users.

Who Supports HTTP/2?

According to Can I use, HTTP/2 is supported by the browsers of 76.62 percent of users in the US and 67.89 percent of users globally. There are a couple of caveats to these numbers, as Internet Explorer 11 only supports HTTP/2 in Windows 10, and Chrome, Firefox and Opera only support HTTP/2 over HTTPS.

You can check how this will affect your website visitors in Google Analytics by going to Audience > Technology > Browser & OS and comparing to the supported browsers.

You’ll also find that most major server software — such as Apache, NGINX, and IIS — already supports HTTP/2. Many of the major CDNs have also added HTTP/2 support, including MaxCDN and Akamai.

HTTPS With HTTP/2

While HTTP/2 supports both secure and non-secure connections, both Mozilla Firefox and Google Chrome will only support HTTP/2 over HTTPS. Unfortunately, this means that many sites that want to take advantage of HTTP/2 will need to be served over HTTPS.
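Because browsers negotiate HTTP/2 over HTTPS via ALPN during the TLS handshake, you can check whether your server advertises it using nothing more than Python’s standard library. This is a minimal sketch, with a placeholder hostname standing in for your own site.

```python
import socket
import ssl

HOST = "www.example.com"  # placeholder: use your own hostname

context = ssl.create_default_context()
# Offer both protocols and let the server choose during the TLS handshake.
context.set_alpn_protocols(["h2", "http/1.1"])

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # "h2" means the server will speak HTTP/2 over this HTTPS connection.
        print(tls.selected_alpn_protocol())
```
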

Fortunately, there are new initiatives such as Let’s Encrypt, which goes into public beta on December 3, 2015. Let’s Encrypt is a new certificate authority that is providing free security certificates for websites. It’s a great initiative towards a more secure web.

Improvements For Users With HTTP/2

Speed, speed and more speed, which makes for a better user experience. As time goes on and people learn the limits of the new protocol, users should see increased speeds on HTTP/2 connections.

What HTTP/2 Means For Developers

Many of the techniques used to speed up websites under HTTP/1.1 are no longer necessary with HTTP/2. These optimizations used to take additional development time and were made to cover up inherent flaws in speed and file loading, but they also caused additional issues at times.

  • Domain Sharding. Loading files from multiple subdomains so that more connections may be established. The increase in parallel file transfers adds to server connection overhead.
  • Image Sprites. Combining image files to reduce requests. The file must be loaded before any image from the file can be shown, and the large image file ties up RAM.
  • Combining Files. CSS and JavaScript files are often combined to reduce the number of requests. This makes the user wait for the combined file before any of it can run, and it consumes additional RAM.
  • Inlining. CSS and JavaScript code, or even images, are placed directly into the HTML, reducing connections but using additional RAM and delaying page rendering until the HTML has finished downloading.
  • Cookieless Domains. Static resources like images, CSS and JavaScript files don’t require cookies, so many developers started sending these from a cookieless domain to save bandwidth and time. With HTTP/2, the headers (including cookies) are compressed, so the sizes of the requests are very small in comparison with HTTP/1.1.

For my fellow geeks out there dealing with REST APIs, you will no longer have to batch requests.

Improvements For Servers With HTTP/2

Many of the developer techniques mentioned above placed additional strain on servers due to the extra connections opened by browsers. These connection-related techniques are no longer necessary with HTTP/2. The result is lower bandwidth requirements, less network overhead and lower server memory usage.

On mobile phones, multiple TCP connections could cause issues with the mobile network, leading to dropped packets and resubmitted requests. Those additional requests only added to the server load.

HTTP/2 itself brings benefits for a server, as well. Fewer TCP connections are necessary, as stated above. HTTP/2 is easier to parse, more compact and less error-prone.

What HTTP/2 Means For SEOs

With GoogleBot adding support for HTTP/2, websites that support the protocol will likely see an additional rankings boost from speed. On top of that, with Chrome and Firefox only supporting HTTP/2 over HTTPS, many websites that have not yet upgraded to HTTPS may see an additional boost in rankings when they do.

I make this last statement with the caveat that many technical items have to be done correctly with HTTPS, or you will likely experience at least a temporary, if not permanent, drop when making the switch from HTTP.

The number one problem I see with sites switching to HTTPS is with redirects: not just 302s used instead of 301s, but also how the redirects are placed or written, additional hops or chains in the redirects and old redirects that are never cleaned up. There are many additional items that need to be cleaned up, such as internal links, external links where possible, mixed content, duplication issues, canonical tags, sitemaps, tracking systems and more.
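As a rough illustration, the sketch below uses the requests library and a placeholder URL to walk a redirect chain and flag 302s and extra hops, two of the most common problems mentioned above.

```python
import requests  # pip install requests

def audit_redirects(url):
    """Follow a URL's redirect chain and flag 302s and extra hops."""
    response = requests.get(url, allow_redirects=True, timeout=10)
    for hop in response.history:  # every intermediate redirect response
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print("final:", response.status_code, response.url)

    if any(hop.status_code == 302 for hop in response.history):
        print("warning: temporary (302) redirect in the chain")
    if len(response.history) > 1:
        print("warning: redirect chain with", len(response.history), "hops")

# Placeholder URL: swap in one of your own HTTP URLs to see where it ends up.
audit_redirects("http://example.com/old-page")
```
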

Let’s not forget what Gary Illyes said:

[Embedded tweet from Gary Illyes]

There are other reasons besides Google ranking signals that your website should be secure. One that most people don’t realize is that when a visitor moves from a secure (HTTPS) site to a non-secure one, the referral data in the headers is dropped.

In Google Analytics, this typically means that more traffic is attributed to direct, when it should actually be attributed to referring websites. HTTPS also prevents ads from being injected on your website, as AT&T was recently found doing with their free Wi-Fi hotspots.

We’ve all seen studies on how slow websites affect conversions and cause users to abandon a website, and conversely how site speed increases lead to increased sales and conversion rates. The important thing to note is that HTTP/2 is faster and provides a better user experience.

Google made speed a ranking factor for a reason, and it will be interesting to see if HTTP/2 itself becomes a ranking factor and how much additional weight will be placed on the added speed.

SEOs, developers, server admins, sales teams and pretty much everyone else should be getting the ball rolling with implementing HTTP/2. There is no downside to upgrading, since if a user cannot load the site over HTTP/2, they will load it just like they always have. Shout from the rooftops with me, or on Twitter:

“Everyone should be making the move to #http2!”

A final note, and an interesting thought from a conversation I had recently with Bill Hartzer at Internet Summit, is that Google may be pushing for HTTPS and only supporting HTTP/2 over HTTPS in Chrome because this will actually eliminate some of the competition from rival ad networks.

Bill said he couldn’t take credit for this idea, but it does make sense. A lot of the smaller networks don’t support HTTPS, so by recommending HTTPS and only supporting HTTP/2 over HTTPS, they are likely gaining more market share in the ad space.

The post Why Everyone Should Be Moving To HTTP/2 appeared first on Search Engine Land.

Unwrap Your Holiday Reputation Management Action Plan by @jeanmariedion

A reputation management plan for the holidays means more than stocked shelves and full tills. You’ll also need a solid plan that can help you spot and solve attacks, so you can give your boss the gift of a solid reputation at the end of the year.

The post Unwrap Your Holiday Reputation Management Action Plan by @jeanmariedion appeared first on Search Engine Journal.

Thanksgiving Google Doodle Features “Three Sisters” Of North American Crops: Corn, Beans & Squash

Google thanksgiving logo 2015

Today’s Thanksgiving Day Google logo is based on the “three sisters” of North American agriculture: corn, beans and squash, and was created by guest Doodler Julia Cone using a papercraft technique.

“In the end, I hope that viewers will enjoy the craft of cut paper as an art form in a digital space,” says Cone on the Google Doodle Blog.

The colorful logo marking today’s holiday leads to a search for “Thanksgiving” and includes “Happy Thanksgiving 2015” sharing icons for Twitter, Facebook, Google+ or email.

Google offered a quick agricultural history lesson on its Doodle blog, explaining the origins of the corn, beans and squash planting technique.

This planting technique, combining the three crops, originated in Haudenosaunee (Iroquois) villages, and was commonly used at the time of the European settlements in the early 1600s. This indigenous practice revolutionized horticulture and helped stave off starvation in many areas, including the Old World.

Here is a selection of Cone’s original sketches that led to the final Doodle used on Google’s U.S. homepage:

Google Thanksgiving Doodle sketches

Search Engine Land wishes all of its readers a happy Thanksgiving!

The post Thanksgiving Google Doodle Features “Three Sisters” Of North American Crops: Corn, Beans & Squash appeared first on Search Engine Land.

How Trust & Unique Identification Impact Semantic Search


There are many factors that are key to the notion of semantic search. Two that are critical to understand, but have not been written about much from an SEO point of view, are trust and unique identification.

These two factors lie at the core of the many changes we see happening in search today, alongside Google’s ever-growing Knowledge Graph and its move in the direction of semantic search.

The Semantic Web Stack

The notion of trust is a key component in the semantic web. Below is an illustration that depicts the semantic web stack, with trust sitting at the top.

Semantic Web Stack (Source: https://en.wikipedia.org/wiki/Semantic_Web_Stack)

Trust is achieved by ascertaining the reliability of data sources and using formal logic when deriving new information. Computers leverage or mimic this aspect of human behavior in the algorithms that provide relevant search results to users.

Search Result Ranking Based On Trust

Search Result Ranking Based on Trust is, in fact, the name of a Google patent filed in September 2012. The patent describes how trust factors can be incorporated into creating a “trust rank,” which can subsequently be used to alter search result rankings in some fashion.

People tend to trust information from entities they trust, so displaying search results to users from entities they trust makes a lot of sense (and also brings in a personalization component).

A group at Google recently wrote a paper titled, Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources. The paper discusses the use of a trustworthiness score — Knowledge-Based Trust (KBT) — which is computed based on factors they describe therein.

Below, I have extracted some salient features from the paper that I believe are worth understanding from an SEO POV:

We propose using Knowledge-Based Trust (KBT) to estimate source trustworthiness as follows. We extract a plurality of facts from many pages using information extraction techniques. We then jointly estimate the correctness of these facts and the accuracy of the sources using inference in a probabilistic model. Inference is an iterative process, since we believe a source is accurate if its facts are correct, and we believe the facts are correct if they are extracted from an accurate source.

The fact extraction process we use is based on the Knowledge Vault (KV) project [10]. KV uses 16 different information extraction systems to extract (subject, predicate, object) knowledge triples from webpages. An example of such a triple is (Barack Obama, nationality, USA). A subject represents a real-world entity, identified by an ID such as mids in Freebase [2]; a predicate is predefined in Freebase, describing a particular attribute of an entity; an object can be an entity, a string, a numerical value, or a date.
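As a small illustration of the data structure described above, a knowledge triple can be modeled as a simple (subject, predicate, object) record; the MID below is shown purely for illustration.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """A (subject, predicate, object) knowledge triple, as used in Knowledge Vault."""
    subject: str    # a real-world entity, identified by an ID such as a Freebase MID
    predicate: str  # a predefined attribute of the entity
    obj: str        # an entity, a string, a numerical value, or a date

# The paper's example, with an illustrative MID standing in for Barack Obama.
fact = Triple(subject="/m/02mjmr", predicate="nationality", obj="USA")
print(fact)
```
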

I also most definitely enjoyed the introduction here:

Quality assessment for web sources is of tremendous importance in web search. It has been traditionally evaluated using exogenous signals such as hyperlinks and browsing history. However, such signals mostly capture how popular a webpage is. For example, the gossip websites listed in [16] mostly have high PageRank scores [4], but would not generally be considered reliable. Conversely, some less popular websites nevertheless have very accurate information.

What can be garnered from this is that SEO practitioners should ensure that all statements written on any website or blog are factual, as this will enhance trustworthiness (which may one day impact rankings).

When it comes to searching for entities in Google, it is evident that Google uses some form of trust-based mechanism. Users tend to trust online reviews, so reviews and review volumes are useful to users when they search for a specific product. As a case in point, a search for the product “La Roche Posay Vitamin C eyes” yields the following result in organic SERPs:

Google Search for “La Roche Posay Vitamin C eyes” — Organic Results

The only example that shows up without the enhanced displays associated with reviews (rich snippets) is a page that, when selected, does in fact contain reviews from an authoritative and trusted source (Amazon).

The “most trusted” result is given first, as that comes from the official website of the brand and the reviews page associated with that product. This is a pattern that seems to be quite prevalent in a large majority of product searches at this point in time.

I have written in the past about how another Google patent may utilize reviews in search results in such a manner, and I will quote a relevant portion of the referenced patent here:

The search system may use the awards and reviews to determine a ranking of the list, and may present the search results using that ranking.

Unique Identifiers In E-Commerce

In addition, I have also described in the past how unique identifiers may be leveraged to aggregate reviews by search engines.

Why is this important in the context of reviews in e-commerce?

If a search engine or e-commerce site cannot uniquely identify a product, multiple pages can be created for the same product. This effectively dilutes each page’s “trust rank” and/or link equity, weakening the signals those pages send to the search engines.

For example, you can see below, in the case of the marketplace eBay, that there are many cases where the same product is listed many times, and hence the reviews are not aggregated on one unique URL:

Search results for “La Roche Posay Active C eyes ebay”

This means that it is critical for merchants to be able to uniquely disambiguate their products internally, if they want to send strong signals in order to rank in organic SERPs for a specific URL.

Ensuring your product listings are correctly and uniquely identified provides this benefit, as it will aggregate the reviews for that product onto the same page/product listing, thereby strengthening the “trust rank” of that page. It ought to have a corresponding effect in terms of avoiding link dilution for that page.

Until recently, this was also an issue on Amazon, but one that appears to have recently changed. Compare a recent product search on Amazon with the same search from a few weeks ago:

In this product search from several weeks ago, you can see many separate listings of the same product.

In a more recent search for the same product, only one listing appears. From that page, you can select other sellers to purchase from.

Amazon very recently altered this (a couple of weeks ago), and now only displays the one (correct) product at the top of their search results; however, this also appears to give a strong and exclusive bias to the brand owner of the product.

This is unfortunate as I now only seem to get one price (from the brand itself), and it is clearly not the best price. For me, it degrades the user experience, as I don’t seem to be able to get the best price or many options from other sellers (which is my understanding of the definition of a marketplace).

As local businesses are all entities with associated products or services, trust clearly has an equivalent effect and plays a strong role here as well. An example is depicted below for a search for a specific product.

Search for “4 slice toaster oven near me”

It is also well known that results from trusted review sites often dominate organic SERPs in local search today, with Yelp as a prominent example. The same applies to professional services and all other kinds of local businesses, and it forms the basis for a large part of the user’s “trust” in a business entity and/or the products or services it offers.

Critic Reviews And Trust

Looking at this in another vein, Google recently started advising users to promote their content using Critic Reviews, suggesting that review markup be added to any of the following pages:

  • The website’s home page
  • A page linked from the home page that provides a list of available reviews
  • A page that displays an individual review

They provide an example for promoting Critic Reviews for the movie “Gravity” and specify that the preferred format is JSON-LD (although they do state that they accept microdata and RDFa as alternative syntaxes). For examples of the microdata format, they recommend looking at schema.org/review.
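As a minimal sketch of what such markup might look like (not Google’s own sample), here is a Python snippet that builds Review markup for “Gravity” as JSON-LD. The critic name, review text and rating values are hypothetical; the full vocabulary is documented at schema.org/Review.

```python
import json

# Hypothetical critic review of the movie "Gravity", expressed with schema.org types.
review = {
    "@context": "http://schema.org",
    "@type": "Review",
    "itemReviewed": {"@type": "Movie", "name": "Gravity"},
    "author": {"@type": "Person", "name": "Jane Critic"},            # hypothetical critic
    "reviewBody": "A visually stunning survival story.",             # hypothetical excerpt
    "reviewRating": {"@type": "Rating", "ratingValue": 4, "bestRating": 5},
}

# Print the <script> block you would embed in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(review, indent=2))
print("</script>")
```
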


Google in fact put out a terrific video on the topic of Critic Reviews. A snapshot below illustrates how the schema.org markup added for these reviews appears on your mobile device.

As Google clearly states here, these snippets help users make decisions and also introduce them to new, authoritative sources (whom they now presumably trust).

Critic Review snippets on mobile

The standard set of attributes for Critic Reviews is well defined in the post, and there are also additional requirements for four specific Critic Review types: movies, books, music albums and cars.

Promote Your Content With Structured Data

As an SEO, you should work to make your code “machine-friendly” and add relevant structured data to your pages using schema.org where applicable. As Google states very clearly here, doing so will give you a better chance of achieving enhanced displays (rich snippets) in search results, as well as a presence in the knowledge graph.

If you can, go one step further than specified in the blog by adding identifiers where possible. Focus primarily on Freebase and Wikidata IDs. I illustrated how to find a Freebase ID in a previous article by locating the “MID” (Machine Identifier), and I also discussed how to help search engines disambiguate your content using the “sameAs” predicate in schema.org.

I would also recommend obtaining the Wikidata identifier (or “QID”), which you can find quite easily on Wikipedia by going to the URL of the entity and then clicking “Wikidata item” in the left-hand navigation.
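As a hedged example of how these pieces fit together, the snippet below emits Organization markup that uses “sameAs” to point at Wikipedia, Wikidata and Freebase entries; the brand name, QID and MID are all hypothetical placeholders.

```python
import json

# Hypothetical brand entity disambiguated with "sameAs" links and identifiers.
organization = {
    "@context": "http://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Brand",   # hypothetical article
        "https://www.wikidata.org/wiki/Q12345",          # hypothetical QID
        "https://www.freebase.com/m/0abcde",             # hypothetical MID
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(organization, indent=2))
print("</script>")
```
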


I would like to end this article with a screenshot from the video that I could not resist including, as it makes a very clear statement: structured data allows Google to answer some really hard questions, and Google clearly loves the ability to do so. This means that if you want to make Google happy, be sure to add structured data to your web pages.

Structured data lets Google answer some really hard questions.

Takeaways

  • Mark up all your pages with relevant schema.org markup; if reviews apply, make doubly sure to mark them up, since they add a trust indicator.
  • Add identifiers wherever possible (MIDs and QIDs).
  • If you are running an e-commerce-type marketplace and are interested in “landing pages,” make sure you uniquely identify your products to ensure that your review volumes are maximized and that you do not lose link equity for those pages.
  • If you are a brand site, make sure to add reviews to your product page, along with your unique identifier, to ensure your appropriate recognition as the “official website,” typically in position 1 in organic SERPs (Other factors may alter this, of course).
  • If you are promoting some form of media that supports Critic Reviews (movies, books or music albums, or a product like cars), be sure to add markup to those pages.

The post How Trust & Unique Identification Impact Semantic Search appeared first on Search Engine Land.