It increases relevancy: Siloing ensures all topically related content is connected, and this in turn drives up relevancy. For example, linking to each of the individual yoga class pages (e.g., Pilates, Yoga RX, etc.) from the “Yoga classes” page helps confirm—to both visitors and Google—that these pages are in fact different types of yoga classes. Google can then feel more confident ranking these pages for related terms, as it is clearer the pages are relevant to the search query.
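To make the hub-and-spoke idea concrete, here is a minimal sketch in Python of how you might generate the internal links a “Yoga classes” hub page would point at. The slugs and anchor text are hypothetical placeholders, not a prescription for any particular site:

```python
# Minimal sketch of a content silo: one hub page linking to each spoke page.
# The slugs and anchor text below are hypothetical placeholders.
silo = {
    "hub": "/yoga-classes/",
    "spokes": {
        "/yoga-classes/pilates/": "Pilates classes",
        "/yoga-classes/yoga-rx/": "Yoga RX classes",
        "/yoga-classes/hot-yoga/": "Hot yoga classes",
    },
}

def hub_links(silo):
    """Return the anchor tags the hub page should contain, one per spoke page."""
    return [f'<a href="{url}">{anchor}</a>' for url, anchor in silo["spokes"].items()]

for link in hub_links(silo):
    print(link)
```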


I’m a big advocate of quality. It’s so much more important than frequency. In my experience, one great piece a month will get much better results than a so-so post once a week. But if you don’t want to reduce your frequency, you could try creating a format or series of posts that are easier to create, like interviews. This can fill in the weeks in between while you write something with more impact. Mix it up! 🙂

You have also mentioned Quuu for article sharing and driving traffic. I have been using Quuu for quite some time now and I don’t think they’re worth it. While the content does get shared a lot, there are hardly any clicks to the site. Even for the clicks that do come through, the average time on site is something like 0.02 seconds, compared to more than 2 minutes for other sources of traffic on my website. I have heard a few other people describe a similar experience with Quuu, so I thought I should let you know.
People are searching for all manner of things directly related to your business. Beyond that, your prospects are also searching for all kinds of things that are only loosely related to your business. These represent even more opportunities to connect with those folks and help answer their questions, solve their problems, and become a trusted resource for them.

Hey Brian, I am an avid reader of your blogs. Having said that, I would love to know your input on “How to select a topic for a guest post?”. The reason I am asking is that, for example, if my keyword is “Custom Software Development Company” and I do a guest post on “How AI is transforming the technology industry”, it wouldn’t work at all! I need your guidance on how to find a topic that adheres to the theme of the target keyword (I am trying to explore co-citation and co-occurrence more).
SEMRush is primarily a search engine optimization tool, meaning you’d use it as a website owner to help find and target keywords that bring you more search engine traffic. However, as a regular web surfer, you can use it to see what kind of search traffic a site gets.
Many blogging software packages automatically nofollow user comments, but those that don't can most likely be manually edited to do this. This advice also goes for other areas of your site that may involve user-generated content, such as guest books, forums, shout-boards, referrer listings, etc. If you're willing to vouch for links added by third parties (for example, if a commenter is trusted on your site), then there's no need to use nofollow on links; however, linking to sites that Google considers spammy can affect the reputation of your own site. The Webmaster Help Center has more tips on avoiding comment spam, for example by using CAPTCHAs and turning on comment moderation.
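If your blogging platform doesn't handle this automatically, here is a minimal sketch (assuming Python with the beautifulsoup4 package installed) of how you might add rel="nofollow" to every link in user-submitted comment HTML before rendering it; the sample comment markup is purely illustrative:

```python
# Minimal sketch: add rel="nofollow" to all links in user-generated HTML.
# Assumes beautifulsoup4 is installed (pip install beautifulsoup4).
from bs4 import BeautifulSoup

def nofollow_links(comment_html: str) -> str:
    """Return the comment HTML with rel="nofollow" set on every anchor tag."""
    soup = BeautifulSoup(comment_html, "html.parser")
    for anchor in soup.find_all("a"):
        anchor["rel"] = "nofollow"  # overwrite any existing rel value
    return str(soup)

comment = '<p>Great post! Check out <a href="http://example.com/page">my site</a>.</p>'
print(nofollow_links(comment))  # prints the same markup with rel="nofollow" on the link
```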
One of the things that can slow-bleed the traffic from your site is the quality (or lack thereof) of the content you publish. Previous Google updates like Panda were released specifically to deal with the issue of low-quality content on websites. Long story short: Panda was intended to stop sites with bad content from appearing in the search results.

Too much web traffic can dramatically slow down or prevent all access to a website. This happens when more file requests reach the server than it can handle, whether from an intentional attack on the site or simply from over-popularity. Large-scale websites with numerous servers can often cope with the traffic required, so it is more likely that smaller services are affected by traffic overload. A sudden traffic load may also hang your server or result in a shutdown of your services.
Other nifty stats include search engine visibility, backlink counts, domain age, and top competitors. Also, remember that this tool is just an estimator, so don’t take its website valuations at face value.
Ask for a technical and search audit for your site to learn what they think needs to be done, why, and what the expected outcome should be. You'll probably have to pay for this. You will probably have to give them read-only access to your site on Search Console. (At this stage, don't grant them write access.) Your prospective SEO should be able to give you realistic estimates of improvement, and an estimate of the work involved. If they guarantee you that their changes will give you first place in search results, find someone else.
In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[32] On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up more quickly on Google than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[33] Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[34]
Great post Ross, but I have a question on scaling the work that goes into producing the KOB score: how do you recommend going about getting the Moz difficulty score – do you pull it manually and then VLOOKUP everything, or some other way? My current Moz membership allows 750 searches a day for keyword difficulty, so this can be a limiting factor in this research. Would you agree?
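For what it's worth, the VLOOKUP step can be automated once the difficulty scores are exported. Here is a minimal sketch (assuming Python with pandas, and two hypothetical CSV exports: one with keyword and volume, one with keyword and difficulty) that merges them and computes a simple volume-to-difficulty ratio as a rough opportunity score; this is an illustration, not the exact KOB formula from the article:

```python
# Minimal sketch: merge a keyword/volume export with a keyword/difficulty export.
# File names and column names are hypothetical placeholders.
import pandas as pd

volumes = pd.read_csv("keywords_volume.csv")      # columns: keyword, volume
difficulty = pd.read_csv("moz_difficulty.csv")    # columns: keyword, difficulty

merged = volumes.merge(difficulty, on="keyword", how="left")

# One simple ratio: search volume relative to difficulty (higher = more opportunity).
merged["opportunity"] = merged["volume"] / merged["difficulty"]

merged.sort_values("opportunity", ascending=False).to_csv("kob_report.csv", index=False)
```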
So, Google has accepted the reconsideration request; you can now move forward with a high-quality link building and content creation strategy. I see everyone creating threads about great content marketing examples, but the problem is that most of the time these are big-business examples. SMEs and start-ups do not have big dollars to do such things, so the next best thing is to create a content marketing calendar for your clients.
The first relates to internal link structure. I’ve made the mistake you say you’ve seen so often. I have a primary keyword and have used that keyword in the main navigation, linked to a page optimized for that keyword. But I’ve also got a bunch of contextual links in posts pointing to that page, usually with the keyword in the anchor text. I now understand that those internal links aren’t helping much, at least from an SEO perspective. Am I better off removing that keyword and direct link from the menu and simply linking to the page from multiple posts and pages within the site? Or will I get better results leaving it in the main menu and changing the contextual links in the posts to point to a related page with a different keyword?
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12]
The other form is just social media. So, this is just social. This is using the platforms, such as Facebook, such as, you know, Facebook Groups, using Instagram, using YouTube, using social media in order to promote your business, your links, or whatever. So, to me, the quality of the traffic that you get from social really depends on the platform. Honestly, I’d say something like Twitter would not get you very high quality traffic. In fact, traffic from Twitter, in my experience, is usually very low quality, which means low conversions, and no sales, basically. So, you know, this isn’t a hard-and-fast rule, but basically, something like YouTube, on the other hand, is very high converting. So, basically, you’ve got social, and you’ve got SEO, which is organic traffic. It’s free.
To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots (usually <meta name="robots" content="noindex">). When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed and will instruct the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[46]
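As a quick illustration of how that parsing works on the crawler side, here is a minimal sketch using Python's standard urllib.robotparser module; the domain, the paths, and the example robots.txt rules are hypothetical:

```python
# Minimal sketch: check whether a crawler is allowed to fetch a URL,
# following the robots.txt conventions described above.
#
# Hypothetical robots.txt at https://www.example.com/robots.txt:
#   User-agent: *
#   Disallow: /search
#   Disallow: /cart
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # fetches and parses the live file

print(rp.can_fetch("*", "https://www.example.com/search?q=yoga"))   # False if /search is disallowed
print(rp.can_fetch("*", "https://www.example.com/yoga-classes/"))   # True if not disallowed
```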
Hi, my name is Dimitrios and I am responsible for Crave Culinaire’s digital marketing. I would like to drive more traffic to Crave’s blog. Since Crave Culinaire is the only catering company that provides molecular cuisine, I thought about creating a blog post about that. The influencers in this niche have great success in utilizing recipes on their blogs. I will share some recipes from Brian Roland, owner and head chef of Crave Culinaire.

If you are serious about improving search traffic and are unfamiliar with SEO, we recommend reading this guide front-to-back. We've tried to make it as concise as possible and easy to understand. There's a printable PDF version for those who'd prefer, and dozens of linked-to resources on other sites and pages that are also worthy of your attention.
Second and last, keywords. I use SEO Ultimate on WordPress and I sometimes have doubts about what words and phrases to put in the “Tags” box. Do they need to be very specific, or are broad words good to have too? For example, if I’m writing about Porsche beating the previous record on the Nurburgring with the new GT2 RS, can I put “Nurburgring” and “Porsche GT2 RS” as tags, or is it better to just keep “Porsche record nurburgring” and specific tags like that one?
You may not want certain pages of your site crawled because they might not be useful to users if found in a search engine's search results. If you do want to prevent search engines from crawling your pages, Google Search Console has a friendly robots.txt generator to help you create this file. Note that if your site uses subdomains and you wish to have certain pages not crawled on a particular subdomain, you'll have to create a separate robots.txt file for that subdomain. For more information on robots.txt, we suggest this Webmaster Help Center guide on using robots.txt files13.
WOW. I consider myself a total newbie to SEO, but I’ve been working on my Squarespace site for my small business for about 3 years and have read dozens of articles on how to improve SEO. This has been the MOST USEFUL and information-packed resource I’ve found so far. I’m honestly shocked that this is free to access. I haven’t even completely consumed this content yet (I’ve bookmarked it to come back to!) but I’ve already made some significant changes to my SEO strategy, including adding a couple of infographics to blog posts, changing my internal and external linking habits, editing meta descriptions, and a bunch more. Thanks for all the time and passion you’ve put into this.

Hack #1: Hook readers in from the beginning. People have low attention spans. If you don’t have a compelling “hook” at the beginning of your blogs, people will click off in seconds. You can hook them in by teasing the benefits of the article (see the intro to this article for example!), telling a story, or stating a common problem that your audience faces.
Loved this article. I was in stuck mode. Am relatively new to blogging. Started my online business and website about 3 months ago. Am learning a lot, but had an aha moment about traffic – targeted traffic – and found this article. Thanks for the great tips. I hope to implement all of them, and realize how much more there is to each one of the steps you outline. Great stuff!