How to make a semantic core. A simple example of compiling a semantic core

The semantic core is the basis of website promotion on the Web. Without one, you will not be able to keep a site at the top of search results for long. We will explain what the core is made of, where to look for the data, and which tools to use.

What is a semantic core

To simplify things, let's say that the semantic core (SC) is the full set of words, phrases, and their variations that describe the content of your site. The more accurate and complete the core, the easier it is to promote the site.

Roughly speaking, it is one big list of the words and phrases (keys) that users type when searching for similar products and services. There are no universal recommendations on the size of the core, but there is one rule: the bigger and cleaner, the better. The main thing is not to inflate the size artificially just to make the core look bigger. If you chase size at the expense of quality, all the work goes down the drain: the core simply will not work.

Let's take an analogy. Imagine you are the head of a large construction company that needs to build a lot of buildings in a short time. Your budget is unlimited, but you have to hire at least a hundred people (a union requirement). Which hundred people will you hire for such responsible work: anyone at all, or carefully selected specialists, since the budget allows it? Whoever you recruit, you will be building houses with them, so it is reasonable to assume you will choose carefully, because the result depends on it.

It is the same with the core. For it to work even at a basic level, it should contain at least a hundred keys. And if you stuff anything at all into the core just to make it bigger, the result is guaranteed to be a failure.

General rules for constructing a semantic core

One query - one page. For every query you need to know exactly which single page the user should land on. You cannot have several pages targeting the same query: internal competition arises and the quality of promotion drops sharply.

The user gets predictable content for their query. If a client is looking for delivery options in their region, do not send them to the site's main page if that information is not there. Sometimes, after compiling the core, it becomes clear that new pages have to be created for certain search queries. That is normal and common practice.

The core contains all types of queries (HF, MF, LF). Frequency is covered below; just keep this rule in mind as you read on. Simply put, you should distribute these queries across specific pages of your site.

An example of a core distribution table for site pages.

Ways to collect the kernel

Wrong: copy from competitors

This is the way to go when there is no time or money but the core has to be assembled somehow. We find several direct competitors (the stronger, the better) and then use a service such as spywords.ru to get their keyword lists. We do this for each of them, merge the queries, throw out the duplicates, and get a base we can somehow build on.

The disadvantages of this approach are obvious: it is not a given that you should target the same queries, and parsing and tidying up such a core can take a lot of time.

Sometimes even near-identical competitors have their own specifics in their queries, which they account for and you do not. Or they focus on something you do not do at all, so those keys work into the void and drag down your rankings.

On the other hand, it takes a lot of time and effort, and sometimes money to pay for the work, to bring such a base into usable shape. When you start calculating the economics (and in marketing you always should), you often realize that building your own core from scratch would cost the same or even less.

We do not recommend this method unless the project is a complete disaster and you need to start somehow. Even then, after launch you will have to redo almost everything, and the work will have been wasted.

Right: build the semantic core from scratch

To do this, we study the site thoroughly and work out which audience we want to attract, and with which problems, requirements, and questions. We think about how those people will search for us, compare that with the target audience, and adjust the goals if necessary.

Such work takes a lot of time; it is unrealistic to do it all in a day. In our experience, the minimum time to build a core is a week, provided that one person works on this project full time. Remember that the semantic core is the foundation of promotion: the more accurately we compose it, the easier every later stage will be.

There is one danger that beginners forget about. The semantic core is not something you do once and for all. It is worked on constantly: the business, the queries, and the keywords change. Something disappears, something becomes obsolete, and all of this must be reflected in the core right away. This does not mean you can do a sloppy job at first and polish it later; it means that the more accurate the core, the faster you can make changes to it.

Such work is expensive even in-house (if you do not order the core from an external company), because it requires qualifications, an understanding of how search works, and full immersion in the project. The core cannot be built in spare moments; it should be the main task of an employee or a department.

Search frequency shows how often a given word or phrase is searched for per month. There are no formal criteria for dividing queries by frequency; it all depends on the industry and the profile.

For example, the phrase "buy a phone on credit" gets 7764 searches per month. For the phone market this is a mid-frequency query. There are queries asked far more often: "buy a phone" gets more than a million searches and is a high-frequency query. And there are queries asked far less often: "buy a phone on credit via the Internet" gets only 584 searches and is low-frequency.

Meanwhile, the phrase "buy a drilling rig" gets only 577 searches, yet it is considered a high-frequency query. That is even less than the low-frequency query from the previous example. Why?

The point is that the phone market and the drilling-rig market differ in unit volume by a factor of thousands, and the number of potential customers differs by the same amount. So what is a lot for one business is very little for another. Always look at the size of the market and know the approximate total number of potential customers in the region where you operate.

Dividing requests by relative frequency per month

High-frequency (HF). These should go into the meta tags of the site's pages and be used for promoting the site as a whole. Competing on high-frequency queries is extremely difficult; it is easier to simply stay "on trend", which costs nothing. In any case, include them in the core.

Mid-frequency (MF). These are the same high-frequency queries, only formulated a little more precisely. Competition for them in the contextual advertising block is not as fierce as for HF, so they can already be used for paid promotion if the budget allows. Such queries can already bring targeted traffic to your site.

Low-frequency (LF). The workhorse of promotion. With proper setup, it is low-frequency queries that bring the bulk of traffic. You can advertise on them freely, optimize existing pages for them, or even create new ones if you cannot do without them. A good semantic core is about 3/4 low-frequency queries and keeps growing at their expense.

Ultra-low-frequency. The rarest but most specific queries, for example "buy a phone at night in Tver on credit". Few people bother with them when compiling a core, so there is practically no competition. The downside is that they really are asked very rarely, yet they take as much time as the rest. So it makes sense to deal with them only after the main work is done.

Types of requests depending on the purpose

Informational. Used to learn something new or get information on a topic, for example "how to choose a banquet hall" or "what kinds of laptops are there". All such queries should lead to informational sections: a blog, news, or topical collections. If you see that many informational queries are being typed and there is nothing on the site to answer them, that is a reason to create new sections, pages, or articles.

Transactional. Transaction = action: buy, sell, exchange, receive, deliver, order, and so on. Most often such queries are served by pages of specific products or services. If most of your transactional queries are high- or mid-frequency, narrow them down and refine them. This lets you send people straight to the right pages instead of leaving them on the main page without specifics.

Other. Queries with no clear intent or action. With "beautiful balls" or "sculpting clay crafts" you cannot tell why the person asked: maybe they want to buy it, or learn the technique, or read more about how it is done, or have someone do it for them. It is unclear. Work with such queries carefully and clean out the junk keys thoroughly.

To promote a commercial site, you mainly need transactional queries and should avoid informational ones: for those, search engines show information portals, Wikipedia, and aggregator sites, and it is almost impossible to compete with them.
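
If your query list lives in a spreadsheet or a plain file, a rough first pass at this intent split can be scripted. Below is a minimal sketch; the marker word lists and the example queries are illustrative only, not a definitive classifier:

```python
# Rough intent tagging for a keyword list (marker words are examples only).
TRANSACTIONAL = {"buy", "order", "price", "delivery", "cheap", "sale"}
INFORMATIONAL = {"how", "what", "why", "which", "choose", "review"}

def intent(query: str) -> str:
    words = set(query.lower().split())
    if words & TRANSACTIONAL:
        return "transactional"
    if words & INFORMATIONAL:
        return "informational"
    return "other"  # needs manual review and careful junk-key cleanup

queries = ["buy a phone on credit", "how to choose a banquet hall", "beautiful balls"]
for q in queries:
    print(q, "->", intent(q))
```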

Junk keys

Sometimes the collected queries include words or phrases that have nothing to do with your business or that you simply do not offer. For example, if you only make souvenirs from softwood, you probably do not need the query "bamboo souvenirs". Here "bamboo" is a junk element that clogs the core and muddies the search picture.

We collect such keys in a separate list; they will come in handy for contextual advertising. We add them as negative keywords, and then our site will appear in the results for "pine souvenirs" but not for "bamboo souvenirs".

We do the same across the whole core: find what does not fit the profile, remove it from the core, and put it in a separate list.

Each request consists of three parts: a specifier, a body, and a tail.

The general principle is this: the body defines the subject of the search, the specifier defines what needs to be done with that subject, and the tail adds detail that refines the whole query.

By combining different specifiers and tails for queries, you can get many keywords that suit you and will be included in the core.

Step by step build of the kernel from scratch

The very first thing you can do is go through all the pages of your site and write out all the product names and the set phrases used for product groups. To do this, look at the headings of categories and sections and at the main characteristics. Write everything down in Excel; it will come in handy in the next steps.

For example, if we have a stationery store, we get the following:

Then we add characteristics to each request - we increase the "tail". To do this, we find out what properties these products have, what else can be said about them, and write them in a separate column:

After that, we add "specifiers": action verbs that are relevant to our topic. If, for example, you have a store, then it will be "buy", "order", "in stock" and so on.

We collect separate phrases from this in Excel:
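
The same combination step can also be done with a short script instead of Excel formulas. Here is a minimal sketch, assuming a stationery store; the product names, tails, and specifiers below are made-up examples, so substitute your own columns:

```python
from itertools import product

# Hypothetical lists for a stationery store; replace with your own columns.
products   = ["notebooks", "ballpoint pens", "pencils"]
tails      = ["", "wholesale", "for school", "with logo"]   # characteristics
specifiers = ["", "buy", "order", "in stock"]               # action words

phrases = set()
for spec, prod, tail in product(specifiers, products, tails):
    phrase = " ".join(part for part in (spec, prod, tail) if part)
    phrases.add(phrase)

for p in sorted(phrases):
    print(p)  # paste this column back into Excel for the next steps
```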

Collecting extensions

Let's analyze three typical tools for collecting the kernel - two free ones and a paid one.

Free. We type our phrase into it and get a list of queries that look like ours. We go through the list carefully and pick what suits us, and we run everything we got at the first stage through it this way. The work is long and tedious.

As a result, you will have a semantic core that reflects the content of your site as accurately as possible. You can already work with it fully during promotion.

When searching for words, focus on the region where you sell a product or service. If you do not work throughout Russia, switch to the “by region” mode (immediately below the search bar). This will allow you to get an accurate picture of the requests in the place you need.

Consider the history of requests. Demand is not static, which many people forget about. For example, if you search for the query “buy flowers” ​​at the end of January, it may seem that almost no one is interested in flowers - only a hundred or two requests. But if you search for the same in early March, the picture is completely different: thousands of users are looking for this. Therefore, remember about seasonality.

Also free, it helps to find and select keywords, predict queries and gives performance statistics.

Key Collector. This program is a real combine harvester that can do 90% of the work of collecting a semantic core. But it is paid: almost 2000 rubles. It pulls keys from many sources, looks at ratings and queries, and collects analytics for the core.

The main features of the program:

collection of key phrases;

determination of the cost and value of phrases;

identifying relevant pages;

Everything it can do can also be done with a few free analogues, but that takes many times longer. Automation is this program's strong suit.

As a result, you get not only a semantic core, but also full analytics and recommendations for improvement.

Removing junk keys

Now we need to clean up the core to make it even more effective. To do this, use Key Collector (it will do it automatically) or look for junk manually in Excel. At this stage we need the list of unnecessary, harmful, or irrelevant queries that we made earlier.

Junk-key removal can be automated
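
If you do not have Key Collector, the same cleanup can be scripted against the junk list collected earlier. A minimal sketch, in which the file names, the "query" column name, and the example stop words are assumptions:

```python
import csv

# Words from the junk list we collected earlier (example entries).
stop_words = {"bamboo", "free", "download"}

def is_junk(query: str) -> bool:
    # A query is junk if any of its words is on the stop list.
    return any(word in stop_words for word in query.lower().split())

with open("core.csv", newline="", encoding="utf-8") as src, \
     open("core_clean.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)            # expects a "query" column
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if not is_junk(row["query"]):
            writer.writerow(row)            # keep only non-junk queries
```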

Grouping requests

Now that collection is done, all the found queries need to be grouped. This is done so that keywords close in meaning are assigned to one page rather than spread across several.

To do this, we combine queries that are similar in meaning and are answered by the same page, and note next to each group where it belongs. If there is no such page but the group contains a lot of queries, it most likely makes sense to create a new page or even a new section on the site to which everyone with such queries will be sent.

An example of grouping, again, can be seen in our worksheet.
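
A very rough first pass at this grouping can also be automated before the manual review. The sketch below groups queries that share the same set of "significant" words; the ignore list and the sample queries are illustrative, and the result still needs to be checked by hand:

```python
from collections import defaultdict

# Words that do not change which page a query belongs to (examples only).
IGNORE = {"buy", "order", "price", "cheap", "in", "for", "a", "the"}

def signature(query: str) -> tuple:
    # Queries with the same significant words land in the same group.
    return tuple(sorted(w for w in query.lower().split() if w not in IGNORE))

groups = defaultdict(list)
sample = ["buy ballpoint pens", "ballpoint pens price",
          "order notebooks", "notebooks for school"]
for q in sample:
    groups[signature(q)].append(q)

for sig, qs in groups.items():
    print(sig, "->", qs)   # each group is a candidate for one page
```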

Use every automation tool you can get your hands on. It saves a lot of time when building the core.

Do not collect informational and transactional requests on one page.

The more low-frequency queries in the texts, the better. But do not get carried away, do not turn the text on the site into something understandable only by a robot. Remember that real people will read you too.

Do periodic cleaning and updating of the kernel. Make sure that the information in the semantic core is always up to date and reflects the current situation. Otherwise, you will spend money on something that you cannot ultimately give to your customers.

Remember the benefits. When chasing search traffic, don't forget that people come from different sources and stay where it's interesting. If you have an up-to-date core all the time and at the same time the text on the pages is written in a human, understandable and interesting language, you are doing everything right.

Finally, here is the core-building algorithm once more:

1. find all keywords and phrases;

2. clean them of junk queries;

3. group the queries by meaning and map them to the pages of the site.

Do you want to promote the site, but you understand that it takes a long time to collect the semantic core? Or do you not want to understand all the nuances, but just get the result? Write to us, and we will select the best option for promoting your site for you.

At the moment, factors such as content and structure play the most important role in search promotion. But how do you decide what to write about and which sections and pages to create on the site? On top of that, you need to find out exactly what the target visitor of your resource is interested in. To answer all these questions, you need to assemble a semantic core.

Semantic core - a list of words and phrases that fully reflects the topic of your site.

In this article I will explain how to collect it, clean it, and break it down into a structure. The result will be a complete structure with queries clustered by page.

Here is an example of a core broken down into a structure:


By clustering I mean distributing your search queries across separate pages. This approach works both for promotion in Yandex and in Google. In this article I will describe a completely free way to create a semantic core, and I will also show options using various paid services.

By reading this article, you will learn how to

  • Choose the right queries for your topic
  • Collect the most complete set of phrases
  • Clean out uninteresting queries
  • Group queries and create a structure

Having collected the semantic core, you can

  • Create a meaningful structure on the site
  • Create a multi-level menu
  • Fill pages with texts and write meta descriptions and titles for them
  • Track your site's positions for these queries in the search engines

Collection and clustering of the semantic core

Proper compilation of a core for Google and Yandex begins with identifying the main key phrases of your subject area. As an example, I will demonstrate the process on a fictitious online clothing store. There are three ways to collect a semantic core:

  1. Manual. Using the Yandex Wordstat service, you enter your keywords and manually select the phrases you need. This method is quick if you need to collect keys for a single page, but it has two drawbacks.
    • Accuracy suffers: with this method you can always miss some important words.
    • You will not be able to assemble a semantic core for a large online store this way; the Yandex Wordstat Assistant plugin simplifies the job, but it does not solve the problem.
  2. Semi-automatic. In this method I use a program to collect the core and then manually break it down into sections, subsections, pages, and so on. In my opinion this is the most effective way to compile and cluster a semantic core, and it has a number of advantages:
    • Maximum coverage of the topic.
    • High-quality grouping.
  3. Automatic. There are now several services offering fully automatic core collection or clustering of your queries. I do not recommend the fully automatic option, because the quality of collection and clustering is currently quite low. Automatic query clustering is gaining popularity and has its place, but you will still have to merge some pages by hand, because the system does not produce a perfect ready-made result. And, in my view, you will simply get confused and never properly immerse yourself in the project.

To compile and cluster a full-fledged correct semantic core for any project, in 90% of cases I use a semi-automatic method.

So, here is what we are going to do, step by step:

  1. Select queries for the topic
  2. Collect the core from those queries
  3. Clean out non-target queries
  4. Cluster (break the phrases down into a structure)

I showed an example of a collected semantic core grouped into a structure above. As a reminder, we have an online clothing store, so let's start with point 1.

1. Selection of phrases for your subject

At this stage, we need the Yandex Wordstat tool, your competitors and logic. In this step, it is important to collect a list of phrases that are thematic high-frequency queries.

How to select queries to collect semantics from Yandex Wordstat

Go to the service, select the city or region(s) you need, type in what you consider the "fattest" queries, and look at the right-hand column. There you will find the thematic words you need, both for other sections and as frequency synonyms of the phrase you entered.

How to select queries before compiling a semantic core with the help of competitors

Enter the most popular queries in the search engine and select one of the most popular sites, many of which you most likely already know.

Pay attention to the main sections and save yourself the phrases you need.

At this stage, it is important to do it right: to cover all kinds of words from your subject as much as possible and not miss anything, then your semantic core will be as complete as possible.

For our example, we need to make a list of the following phrases/keywords:

  • clothing
  • Shoes
  • Boots
  • Dresses
  • T-shirts
  • Underwear
  • Shorts

Which phrases are pointless to enter: women's clothing, buy shoes, prom dresses, etc. Why? Because these phrases are "tails" of the queries "clothing", "shoes", "dresses" and will be added to the semantic core automatically at stage 2. In other words, you can add them, but it would be pointless double work.

Which keys do you need to enter? "Ankle boots" and "boots" are not the same thing: what matters is the word form itself, not whether the words share a root.

Some people will end up with a long list of key phrases, and for others it will consist of a single word - do not be alarmed. For an online door store, for example, the word "doors" may well be enough to build the semantic core from.

So, by the end of this step we should have a list something like this.

2. Collection of queries for the semantic core

For proper, full-scale collection we need a program. I will show the example in two programs at once:

  • A paid one - KeyCollector, for those who already have it or are ready to buy it.
  • A free one - Slovoeb, for those who are not ready to spend money.

Opening the program

Create a new project and name it, for example, Mysite

Now, to further collect the semantic core, we need to do a few things:

Create a new Yandex mail account (using your existing one is not recommended, because it can get banned for sending too many requests). Say you have created the account ivan.ivanov@yandex.ru with the password super2018. Now you need to enter this account in the settings as ivan.ivanov:super2018 and click the "save changes" button below. More details in the screenshots.

Select a region for compiling the semantic core. Select only the regions in which you plan to promote the site and click save. This determines the frequency of the queries and whether they make it into the collection at all.

All settings are completed, it remains to add our list of key phrases prepared at the first step and click the "start collecting" button of the semantic core.

The process is fully automatic and quite long. You can go make coffee in the meantime, and if the topic is broad, like the one we are collecting, it may run for several hours 😉

As soon as all the phrases are collected, you will see something like this:

This stage is done - on to the next step.

3. Cleaning the semantic core

First, we need to remove requests that are not of interest to us (non-targeted):

  • Queries tied to another brand, such as "gloria jeans", "ecco"
  • Informational queries, e.g. "what to wear with boots", "jeans size"
  • Related topics that are not your business, e.g. "used clothes", "wholesale clothes"
  • Queries that have nothing to do with the topic at all, e.g. "sims dresses", "puss in boots" (quite a lot of these end up in the core after collection)
  • Queries from other regions, metro stations, districts, streets (no matter which region you collected for, other regions still slip in)

Cleaning must be done manually as follows:

Enter a word and press "Enter"; if the core we built contains exactly the phrases we want to get rid of, select the ones found and press delete.

I recommend entering not the whole word but a stem, without prepositions and endings: if we type "glori", it will find both "buy jeans at gloria" and "jeans at gloria's"; if we type the full form "gloria", the other word forms will not be found.

Go through every point this way and remove the queries you do not need from the semantic core. This can take a significant amount of time, and you may end up deleting most of the collected queries, but the result will be a clean and correct list of all the queries your site can realistically be promoted for.

Now export all your queries to Excel.

You can also bulk-remove non-target queries from the semantics if you have a ready-made list of stop words; this is especially easy for typical groups of words such as cities, metro stations, and streets. You can download the list of such words that I use at the bottom of the page.
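
The same bulk cleanup, including the "stem" trick described above, is easy to script. A minimal sketch, assuming your queries sit in a plain text file and your stop list contains stems such as "glori" or city names (all entries below are examples):

```python
# Stop list of stems/prefixes: "glori" catches "gloria", "gloria's", etc.
stop_stems = ["glori", "ecco", "moscow", "metro", "used"]

def matches_stop_list(query: str) -> bool:
    return any(word.startswith(stem)
               for word in query.lower().split()
               for stem in stop_stems)

with open("queries.txt", encoding="utf-8") as f:
    queries = [line.strip() for line in f if line.strip()]

kept    = [q for q in queries if not matches_stop_list(q)]
removed = [q for q in queries if matches_stop_list(q)]

print(f"kept {len(kept)}, removed {len(removed)}")
```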

4. Clustering the semantic core

This is the most important and interesting part: we need to distribute our queries across pages and sections, which together will form the structure of the site. A bit of theory first - what to rely on when splitting queries:

  • Competitors. You can pay attention to how the semantic core of your competitors from the TOP is clustered and do the same, at least with the main sections. And also see which pages are in the search results for low-frequency queries. For example, if you're not sure "do or don't" a separate section for "red leather skirts", then type the phrase into a search engine and see the results. If the search results contain resources where there are such sections, then it makes sense to make a separate page.
  • Logics. Do the whole grouping of the semantic core using logic: the structure should be understandable and represent in your head a structured tree of pages with categories and subcategories.

And a couple more tips:

  • It is not recommended to put less than 3 queries per page.
  • Do not make too many levels of nesting, try to make sure that there are 3-4 of them (site.ru/category/subcategory/sub-subcategory)
  • Do not make long URLs, if you have many levels of nesting when clustering the semantic core, try to shorten the url of categories high in the hierarchy, i.e. instead of "your-site.ru/zhenskaya-odezhda/palto-dlya-zhenshin/krasnoe-palto" do "your-site.ru/zhenshinam/palto/krasnoe"

Now to practice

Kernel Clustering by Example

To begin with, we will divide all requests into main categories. Looking at the logic of competitors, the main categories for a clothing store will be: men's clothing, women's clothing, children's clothing, as well as a bunch of other categories that are not tied to gender / age, such as just “shoes”, “outerwear”.

We group the semantic core in Excel. Open the file and proceed:

  1. Divide the queries into main sections
  2. Take one section and break it into subsections

I will show this on the example of one section, men's clothing, and its subsections. To separate some keys from the rest, select the whole sheet and click Conditional Formatting -> Highlight Cells Rules -> Text that Contains.

In the window that opens, type a stem that identifies the section - for men's clothing, for example, "men" - and press Enter. (Note that "men" also occurs inside "women", so split off or filter out the women's queries first.)

Now all of our menswear keys are highlighted. It is enough to use the filter to separate the selected keys from the rest of our assembled semantic core.

So let's turn on the filter: select the column with the queries and click Sort & Filter -> Filter.

And now let's sort.

Create a separate sheet, cut the highlighted rows, and paste them there. You will need to keep breaking the core down this way.

Rename this sheet "Men's Clothing" and call the sheet with the rest of the semantic core "All Queries". Then create another sheet, name it "Structure", and put it first. On the structure page, build a tree. You should end up with something like this:
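
The sheet-per-section workbook can also be produced with pandas instead of manual cut-and-paste. A minimal sketch, assuming pandas with openpyxl installed; the file names, the "query" column, and the section stems are assumptions for this clothing-store example:

```python
import pandas as pd

df = pd.read_excel("core.xlsx")     # expects a "query" column

# Section stems are examples; order matters: "women" is checked before "men",
# because "men" is a substring of "women".
sections = [
    ("women",    "Women's Clothing"),
    ("children", "Children's Clothing"),
    ("men",      "Men's Clothing"),
]

with pd.ExcelWriter("core_by_section.xlsx") as writer:
    rest = df
    for stem, sheet in sections:
        mask = rest["query"].str.contains(stem, case=False, na=False)
        rest[mask].to_excel(writer, sheet_name=sheet, index=False)
        rest = rest[~mask]          # whatever is left goes to "All Queries"
    rest.to_excel(writer, sheet_name="All Queries", index=False)
```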

Now we need to divide the large menswear section into sub-sections and sub-subsections.

For ease of use and navigation through your clustered semantic core, put links from the structure to the corresponding sheets. To do this, right-click on the desired item in the structure and do as in the screenshot.

And now you need to methodically split the queries by hand, deleting along the way whatever you failed to notice and remove at the core-cleaning stage. In the end, thanks to clustering the semantic core, you should arrive at a structure similar to this one:

So. What we have learned to do:

  • Select the queries we need to collect the semantic core
  • Collect all possible phrases for these queries
  • Clean up "garbage"
  • Cluster and create structure

What you can do next by creating such a clustered semantic core:

  • Create a website structure
  • Create a menu
  • Write texts, meta descriptions, titles
  • Collect positions to track the dynamics of requests

Now a little about programs and services

Programs for collecting the semantic core

Here I will describe not only programs, but also plug-ins and online services that I use.

  • Yandex Wordstat Assistant is a plugin that makes it convenient to pick queries from Wordstat. Great for quickly compiling a core for a small site or a single page.
  • Keycollector (Slovoeb is its free counterpart) is a full-fledged program for collecting and clustering a semantic core. It is very popular and has a huge amount of functionality beyond its main purpose: pulling keys from a bunch of other systems, auto-clustering, collecting positions in Yandex and Google, and much more.
  • Just-magic is a multifunctional online service for compiling a core, automatic grouping, checking text quality, and more. The service is freemium: full use requires a monthly fee.

Thanks for reading the article. Thanks to this step-by-step manual, you will be able to compose the semantic core of your site for promotion in Yandex and Google. If you have any questions - ask in the comments. Below are bonuses.

Many web editions and publications talk about the importance of the semantic core.

There are similar texts on our Chto Delat website. In this case, only the general theoretical part of the issue is often mentioned, while the practice remains unclear.

All experienced webmasters say that you need to form the basis for promotion, but only a few explain how to use it in practice. To remove the veil of secrecy from this issue, we decided to highlight the practical side of using the semantic core.

Why do we need a semantic core

It is, first of all, the basis and the plan for further filling and promoting the site. The semantic basis, mapped onto the structure of the web resource, acts as signposts on the road to systematic, purposeful development of the site.

If you have such a basis, you do not have to think about the topic of each next article, you just need to follow the list items. With the core, site promotion moves much faster. And the promotion acquires clarity and transparency.

How to use the semantic core in practice

To begin with, it is worth understanding how the semantic basis is generally compiled. In fact, this is a list of key phrases for your future project, supplemented by the frequency of each request.

It will not be difficult to collect such information using the Yandex Wordstat service:

http://wordstat.yandex.ru/

or any other special service or program. In this case, the procedure will be as follows ...

How to make a semantic core in practice

1. Collect in a single file (Excel, Notepad, Word) all the queries on your key topic taken from the statistics data. Also include phrases "from your head" - logically valid phrases, morphological variants (the ways you yourself would search for your topic), and even variants with typos!

2. Sort the list of semantic queries by frequency, from the queries with the highest frequency down to the least popular.

3. Remove from the semantic basis all junk queries that do not match the subject or focus of your site. For example, if you give people free advice about washing machines but do not sell them, do not use words like:

  • "buy"
  • "wholesale"
  • "delivery"
  • "order"
  • "cheap"
  • "video" (if there are no videos on the site) ...

The point: do not mislead users! Otherwise your site will rack up a huge number of bounces, which will hurt its rankings. And that matters!

4. Once the main list has been cleared of unnecessary phrases and queries and contains a sufficient number of items, you can start using the semantic core in practice.

IMPORTANT: a semantic list can never be considered completely ready and complete. In any subject, you will have to update and supplement the core with new phrases and queries, periodically tracking innovations and changes.

IMPORTANT: the number of articles on the future site will depend on the number of items in the list. Consequently, this will also affect the volume of the necessary content, the working time of the author of the articles, and the duration of filling the resource.

Mapping the semantic core onto the site structure

In order to get any use out of the resulting list, you need to distribute the queries (according to frequency) across the structure of the site. It is hard to give specific numbers here, because the scale and the spread of frequencies can differ greatly between projects.

If, for example, you take a query with a million searches as your baseline, then even a phrase with 10,000 searches will look like a mid-frequency one.

On the other hand, if your main query gets 10,000 searches, the mid-frequency level will be around 5,000 searches a month. In other words, frequency is relative:

"High - Mid - Low" (HF - MF - LF)

But in any case (even if only by eye) you need to divide the whole core into 3 categories (a small sketch follows the list):

  1. high-frequency queries (HF - short phrases with the maximum frequency);
  2. low-frequency queries (LF - rarely requested phrases with low frequency);
  3. mid-frequency queries (MF - all the average queries in the middle of your list).
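A minimal sketch of such a relative split, assuming the frequencies have already been collected; the thresholds (shares of the most frequent query) and the sample frequencies are illustrative only:

```python
def frequency_band(freq: int, max_freq: int) -> str:
    share = freq / max_freq            # frequency relative to the top query
    if share >= 0.30:                  # thresholds are illustrative only
        return "HF"
    if share >= 0.05:
        return "MF"
    return "LF"

core = {"website promotion": 100_000,
        "website promotion to order": 12_000,
        "inexpensive website promotion with links": 900}
top = max(core.values())
for query, freq in core.items():
    print(f"{frequency_band(freq, top):>2}  {freq:>7}  {query}")
```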

In the next step, one or more (at most 3) queries are assigned to the main page. These phrases should have the highest possible frequency: HF queries go on the main page!

Next, from the overall logic of the semantic core, single out several main key phrases from which the sections (categories) of the site will be created. Here you can use HF queries with a lower frequency than the main one, or better still, mid-frequency queries.

The remaining low-frequency phrases are sorted into categories (under the sections and categories you created) and become topics for future site publications. This is easier to understand with an example.

EXAMPLE

A good example of using the semantic core in practice:

1. Main page (HF) - high-frequency request - "website promotion".

2. Section pages (MF) - "website promotion to order", "self-promotion", "website promotion with articles", "website promotion with links". Or simply (if adapted for the menu):

Section 1 - "to order"
Section 2 - "on your own"
Section 3 - "article promotion"
Section 4 - "link promotion"

All this is very similar to the data structure on your computer: logical drive (main page) - folders (sections) - files (articles).

3. Pages of articles and publications (LF) - "quick promotion of the site for free", "promotion to order cheap", "how to promote the site with articles", "promotion of the project on the Internet to order", "inexpensive promotion of the site with links", etc.

In this list, you will have the largest number of various phrases and phrases, according to which you will have to create further site publications.

How to use a ready-made semantic core in practice

Using the query list is internal content optimization. The secret is to optimize (tune) each page of the web resource for the corresponding item of the core. In practice, you take a key phrase and write the most relevant article and page possible for it. A special relevance-checking service will help you assess this, available at the link:

In order to have at least some guidelines in your SEO work, it is better to first check the relevance of sites from the TOP results for specific queries.

For example, if you write text for the low-frequency phrase “inexpensive website promotion with links”, then first just enter it in the search and evaluate the TOP-5 sites in the search results using the relevance assessment service.

If the service showed that sites from the TOP-5 for the query "inexpensive website promotion with links" have relevance from 18% to 30%, then you need to focus on the same percentages. Even better is to create unique text with keywords and about 35-50% relevance. By slightly beating competitors at this stage, you will lay a good foundation for further promotion.

IMPORTANT: the use of the semantic core in practice implies that one phrase corresponds to one unique resource page. The maximum here is 2 requests per article.

The more fully the semantic core is revealed, the more informative your project will be. But if you are not ready for long-term work and thousands of new articles, you do not need to take on wide thematic niches. Even a narrow specialized area, 100% open, will bring more traffic than an unfinished large site.

For example, you could take as the basis of the site not the high-frequency key “site promotion” (where there is enormous competition), but a phrase with a lower frequency and narrower specialization - “article site promotion” or “link promotion”, but expand this topic to the maximum in all articles of the virtual platform! The effect will be higher.

Useful information for the future

Further use of your semantic core in practice will only consist in:

  • correcting and updating the list;
  • writing optimized texts with high relevance and uniqueness;
  • publishing articles on the site (1 query - 1 article);
  • increasing the usefulness of the material (editing the finished texts);
  • improving the quality of the articles and the site as a whole, and keeping an eye on competitors;
  • marking in the core list the queries that have already been used;
  • supplementing optimization with other internal and external factors (links, usability, design, usefulness, videos, online help tools).

Note: All of the above is a very simplified version of activities. In fact, on the basis of the core, sublevels, deep nested structures, and branches to forums, blogs, and chats can be created. But the principle will always be the same.

GIFT: a useful tool for collecting the core in the Mozilla Firefox browser -

The semantic core is a scary name that SEOs have come up with to refer to a fairly simple thing. We just need to select the key queries for which we will promote our site.

And in this article, I will show you how to properly compose a semantic core so that your site quickly reaches the TOP, and does not stagnate for months. Here, too, there are "secrets".

And before we move on to compiling the core, let's look at what it is and what we should end up with.

What is the semantic core in simple words

Oddly enough, the semantic core is just an ordinary Excel file with a list of the key queries for which you (or your copywriter) will write articles for the site.

For example, here is what my semantic core looks like:

I have marked in green the key queries for which I have already written articles, in yellow those I am going to write articles for in the near future, and the colorless cells mean those queries will come a little later.

For each key query I have determined the frequency and the competition, and come up with a "catchy" title. You should end up with roughly the same kind of file. Right now my core consists of 150 keywords, which means I am supplied with "material" for at least 5 months ahead (even if I write one article a day).

A little further down we will talk about what to expect if you decide to order the collection of a semantic core from specialists. For now I will just say this: they will give you the same kind of list, only with thousands of "keys". In a semantic core, however, it is not quantity that matters but quality, and that is what we will focus on.

Why do we need a semantic core at all?

But really, why do we need this torment? You can, in the end, just write high-quality articles just like that, and attract an audience with this, right? Yes, you can write, but you can’t attract.

The main mistake of 90% of bloggers is just writing high-quality articles. I'm not kidding, they have really interesting and useful materials. But search engines don't know about it. They are not psychics, but just robots. Accordingly, they do not put your article in the TOP.

There is another subtle point here, concerning the title. Say you have a very high-quality article on the topic "How to do business in the 'muzzle book'", where you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the most high-quality, useful, and interesting one on the Internet on this topic; nothing else even comes close. But it still will not help you.

Why quality articles fly out of the TOP

Imagine that your site was visited not by a robot but by a live checker (an assessor) from Yandex. He saw that you have the coolest article and manually put you in first place in the results for the query "Community promotion on Facebook".

Do you know what happens next? You will be thrown out of there very soon, because no one will click on your article even in first place. People type the query "Community promotion on Facebook", and your headline reads "How to do business in the 'muzzle book'". Original, fresh, funny, but... not what they searched for. People want to see exactly what they were looking for, not your creative flourish.

Accordingly, your article will occupy its place in the TOP for nothing. And the live assessor, an ardent admirer of your work, can beg his bosses as long as he likes to keep you at least in the TOP 10; it will not help. All the first places will be taken by articles as empty as sunflower-seed husks, copied from one another by yesterday's schoolkids.

But those articles will have the correct, "relevant" title: "Community promotion on Facebook from scratch" (step by step, in 5 steps, from A to Z, for free, etc.). Annoying? You bet. Well then, let's fight the injustice and build a competent semantic core so that your articles take the first places they deserve.

Another reason to start compiling the core right now

There is one more thing that for some reason people don't think much about. You need to write articles often - at least every week, and preferably 2-3 times a week to get more traffic and faster.

Everyone knows this, but almost no one does it. And all because of "creative stagnation", "just not being able to force myself", "plain laziness". In reality, the whole problem is precisely the absence of a concrete semantic core.

Step #1 - Selecting the basic keys

I typed one of my basic keys, "smm", into the search field, and Yandex immediately gave me a dozen suggestions for what else interests people who are interested in "smm". All I have to do is copy these keys into a notepad. Then I check each of them the same way and collect the suggestions for them as well.

After the first stage of collecting the core, you should have a text document containing 10-30 broad basic keys, which we will work with further.

Step #2 - Parsing Basic Keys in SlovoEB

Of course, if you write an article for the query "webinar" or "smm", then a miracle will not happen. You will never be able to reach the TOP for such a broad query. We need to break the base key into many small queries on this topic. And we will do this with the help of a special program.

I use KeyCollector but it's paid. You can use a free analogue - the SlovoEB program. You can download it from the official site.

The most difficult thing about this program is configuring it correctly. I show how to set up and use Slovoeb in a separate article, but there I focus on selecting keys for Yandex Direct.

And here let's take a look at the features of using this program for compiling a semantic core for SEO step by step.

First we create a new project and name it according to the broad key you want to parse.

I usually give the project the same name as the basic key so I do not get confused later. And let me warn you against another mistake: do not try to parse all of the basic keys at once. It will then be very hard to separate the "empty" key queries from the golden grains. Parse one key at a time.

After creating the project, we perform the basic operation: we parse the key through Yandex Wordstat. To do this, click the "Wordstat" button in the program interface, enter your basic key, and click "Start collecting".

For example, let's parse the base key for my blog "contextual advertising".

After that, the process will start, and after a while the program will give us the result - up to 2000 key queries that contain "contextual advertising".

Also, next to each request there will be a “dirty” frequency - how many times this key (+ its word forms and tails) was searched per month through Yandex. But I do not advise you to draw any conclusions from these figures.

Step #3 - Gathering the exact frequency for the keys

Dirty frequency will not show us anything. If you focus on it, then do not be surprised later when your key for 1000 requests does not bring a single click per month.

We need the exact frequency. To get it, first select all the found keys with checkmarks, then click the Yandex Direct button and start the process again. Now Slovoeb will look up the exact number of searches per month for each key.

Now we have an objective picture of how many times each query was actually entered by users over the past month. I suggest grouping all the key queries by frequency to make them easier to work with.

To do this, click the "filter" icon in the Frequency "!" column and set it to show keys with a value "less than or equal to 10".

The program will now show only the queries whose frequency is less than or equal to 10. You can delete these queries or copy them into another keyword group for later. Less than 10 is very low; writing articles for such queries is a waste of time.

Now we need to choose those keywords that will bring us more or less good traffic. And for this we need to find out one more parameter - the level of competition of the request.

Step #4 - Checking for Query Concurrency

All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF), low-frequency (LF). And they can also be highly competitive (VC), medium competitive (SC) and low competitive (NC).

As a rule, HF requests are simultaneously VC. That is, if a query is often searched on the Internet, then there are a lot of sites that want to advance on it. But this is not always the case, there are happy exceptions.

The art of compiling a semantic core lies precisely in finding such queries that have a high frequency, and their level of competition is low. Manually determining the level of competition is very difficult.

You can look at indicators such as the number of main (home) pages in the TOP 10, the length and quality of the texts, and the trust level of the sites ranking in the TOP for the query. All of this gives you some idea of how tough the fight for positions is for this particular query.

But I recommend using the Mutagen service. It takes into account all the parameters I mentioned above, plus a dozen more that neither you nor I have probably even heard of. After the analysis, the service gives an exact value: the level of competition for this query.

Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competition level of "more than 25", the maximum value it reports, and the query gets only 11 searches a month. So it does not suit us.

We can copy all the keys we picked up in Slovoeb and run a bulk check in Mutagen. After that, all that is left is to go through the list and take the queries that have plenty of searches and a low level of competition.

Mutagen is a paid service, but you can do 10 checks per day for free, and the cost of a check is very low. In all the time I have worked with it, I have not yet spent even 300 rubles.

By the way, about the level of competition: if you have a young site, it is better to choose queries with a competition level of 3-5. If you have been promoting for more than a year, you can take 10-15.
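
Once the exact frequencies and Mutagen scores are exported (for example to a CSV), picking out the "high frequency, low competition" candidates fits on one screen. A minimal sketch; the file name, column names, and the frequency threshold are assumptions, while the competition limit follows the 3-5 advice above:

```python
import csv

MIN_EXACT_FREQ = 100       # skip keys that barely get searched (example value)
MAX_COMPETITION = 5        # 3-5 for a young site, 10-15 for an older one

candidates = []
with open("keys_with_mutagen.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):   # expects query, exact_freq, competition
        if (int(row["exact_freq"]) >= MIN_EXACT_FREQ
                and int(row["competition"]) <= MAX_COMPETITION):
            candidates.append(row)

candidates.sort(key=lambda r: int(r["exact_freq"]), reverse=True)
for row in candidates:
    print(row["exact_freq"], row["competition"], row["query"])
```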

And speaking of query frequency, we now need to take the final step, which will let you attract plenty of traffic even from low-frequency queries.

Step #5 - Collecting "tails" for the selected keys

As has been proven and verified many times, your site will receive the bulk of traffic not from the main keys, but from the so-called “tails”. This is when a person enters strange key queries into the search box, with a frequency of 1-2 per month, but there are a lot of such queries.

To see the "tail" - just go to Yandex and enter your chosen key query in the search bar. Here's what you'll see.

Now you just need to write these additional words out in a separate document and use them in your article. You do not have to place them right next to the main key every time; otherwise the search engines will see "over-optimization" and your articles will drop in the results.

Just use them in different places in your article, and then you will receive additional traffic from them as well. I would also recommend that you try to use as many word forms and synonyms as possible for your main key query.

For example, we have a request - "Setting up contextual advertising". Here's how you can reformulate it:

  • Setup = set up, make, create, run, launch, enable, host…
  • Contextual advertising = context, Direct, teaser ads, YAN, AdWords, display network (KMS)…

You never know exactly how people will search for information. Add all these additional words to your semantic core and use them when writing your texts.

So, we collect a list of 100 - 150 keywords. If you are compiling a semantic core for the first time, then it may take you several weeks to complete it.

Or maybe you should not strain your eyes over it at all? Maybe the compilation of the core can be delegated to specialists who will do it better and faster? Yes, such specialists exist, but you do not always need their services.

Is it worth ordering the core from specialists?

By and large, specialists in compiling semantic cores will only do steps 1-3 of our scheme for you. Sometimes, for a hefty additional fee, they will also do steps 4-5 (collecting the tails and checking the competition of the queries).

After that, they will give you several thousand key queries with which you will need to work further.

And the question here is whether you are going to write articles yourself, or hire copywriters for this. If you want to focus on quality, not quantity, then you need to write it yourself. But then it won't be enough for you to just get a list of keys. You will need to choose those topics that you understand well enough to write a quality article.

And here the question arises - why then do we actually need specialists in SA? Agree, parsing the base key and collecting the exact frequencies (steps #1-3) is not at all difficult. It will take you literally half an hour.

The most difficult thing is to choose high-frequency queries with low competition. And then, as it turns out, you also need HF-LC queries on which you can write a good article. This is exactly what will take up 99% of your time working on the semantic core, and no specialist will do it for you. So is it worth spending money on such services?

When the services of SA specialists are useful

Another thing is if you initially plan to attract copywriters. Then you do not need to understand the subject of the request. Your copywriters will also not understand it. They will simply take a few articles on this topic and compile “their” text from them.

Such articles will be empty, miserable, almost useless. But there will be many. On your own, you can write a maximum of 2-3 quality articles per week. And the army of copywriters will provide you with 2-3 shitty texts a day. At the same time, they will be optimized for requests, which means they will attract some kind of traffic.

In this case, yes, go ahead and hire core-compilation specialists. Have them draw up the briefs for the copywriters at the same time. But, as you understand, that will also cost money.

Summary

Let's go over the main ideas in the article again to consolidate the information.

  • The semantic core is just a list of keywords for which you will write articles on the site for promotion.
  • Texts must be optimized for exact key queries; otherwise even your highest-quality articles will never reach the TOP.
  • The core works like a content plan for social networks: it keeps you out of "creative block" and means you always know exactly what you will write about tomorrow, the day after, and in a month.
  • For compiling a semantic core it is convenient to use the free Slovoeb program; it is all you really need.
  • Here are the five steps of compiling a core: 1 - select the basic keys; 2 - parse the basic keys; 3 - collect the exact frequency of the queries; 4 - check the competition of the keys; 5 - collect the "tails".
  • If you are going to write the articles yourself, it is better to build the semantic core yourself, for yourself; core-compilation specialists will not be able to help you here.
  • If you want to go for quantity and use copywriters to write the articles, then delegating the compilation of the semantic core is perfectly fine - as long as you have enough money for everything.

I hope this guide was helpful to you. Save it to your favorites so as not to lose it, and share it with your friends. Don't forget to download my book. There I show you the fastest way from zero to the first million on the Internet (squeezed from personal experience over 10 years =)

See you later!

Your Dmitry Novoselov

Hello, dear readers of the blog. I want to take another run at the topic of collecting the semantic core: first a bit of theory, as usual, and then plenty of practice, perhaps a little clumsy in my rendition. So, a lyrical digression first. A year after starting this blog, I got tired of wandering around blindfolded hoping for luck. Yes, there were "lucky hits" (intuitively guessing the queries people frequently ask search engines) and there was some search traffic, but I wanted to hit the target every time (or at least see it).

Then I wanted more: to automate the process of collecting queries and screening out the "duds". Hence the experience with Keycollector (and its crudely named younger brother) and another article on the topic. Everything was great, even wonderful, until I realized that one very important point had essentially stayed behind the scenes: distributing the queries across articles.

Writing a separate article for each individual query is justified either in highly competitive topics or in highly profitable ones. For information sites it is complete nonsense, so queries have to be combined on one page. How? Intuitively, i.e. blindly again. But not all queries get along on one page and have even a hypothetical chance of reaching the top.

So today we will talk about automatic clustering of the semantic core using KeyAssort (splitting queries across pages and, for new sites, also building a structure from them, i.e. sections and categories). And, just in case, we will once again walk through the process of collecting queries (including with new tools).

Which of the stages of collecting the semantic core is the most important?

In itself, the collection of queries (the basis of the semantic core) for a future or existing site is a rather interesting process (as anyone, of course) and can be implemented in several ways, the results of which can then be combined into one large list (by cleaning up duplicates, deleting pacifiers by stop words).

For example, you can manually start tormenting Wordstat, and in addition to this, connect the Keycollector (or its dissonant free version). However, it's all great when you are more or less familiar with the topic and know the keys you can rely on (collecting their derivatives and similar queries from the right column of Wordstat).

Otherwise (and in any case it will not hurt), you can start with the "coarse grinding" tools. For example, Serpstat (formerly Prodvigator), which lets you literally "rob" your competitors of the keywords they use. There are other similar "rob the competitors" services (spywords, keys.so), but I "got stuck" with the former Prodvigator.

Finally, there is also the free Bukvarix, which lets you start collecting queries very quickly. You can also order a custom export from the monstrous Ahrefs database and again get your competitors' keys. In general, it is worth considering everything that can bring in at least a fraction of the queries useful for future promotion, which will then not be so difficult to clean up and combine into one large (often even huge) list.
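
As a small illustration of that "combine into one large list" step, here is a minimal Python sketch that merges keyword lists exported from several tools into a single deduplicated file. The file names are assumptions; the exports just need to be plain text with one query per line.

```python
from pathlib import Path

# Hypothetical exports, one query per line (Wordstat, Serpstat, Bukvarix, etc.)
SOURCES = ["wordstat.txt", "serpstat.txt", "bukvarix.txt"]

def merge_keyword_lists(paths: list[str]) -> list[str]:
    seen = set()
    merged = []
    for path in paths:
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            query = " ".join(line.lower().split())  # normalize case and spacing
            if query and query not in seen:
                seen.add(query)
                merged.append(query)
    return merged

if __name__ == "__main__":
    pool = merge_keyword_lists(SOURCES)
    Path("query_pool.txt").write_text("\n".join(pool), encoding="utf-8")
    print(f"Merged {len(pool)} unique queries")
```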

We will look at all of this (in general terms, of course) a little lower down, but in the end the main question always arises - what to do next. In fact, it is scary even to approach what we got as a result (having robbed a dozen or two competitors and scraped the bottom of the barrel with Key Collector). Your head may burst from trying to split all these queries (keywords) across separate pages of a future or existing site.

Which queries will successfully coexist on one page, and which ones should not even be combined? A really difficult question, which I previously solved purely intuitively, because manually analyzing the Yandex (or Google) search results to see "how the competitors do it" is tedious, and no automation options came to hand. Well, until now. Such a tool has finally "surfaced", and it will be discussed in the final part of the article.

This is not an online service, but a software solution, the distribution of which can be downloaded on the main page of the official website (demo version).

Therefore, there are no restrictions on the number of processed queries - process as many as you need (there are, however, nuances in collecting the data). The paid version costs less than two thousand rubles, which, for the tasks it solves, is practically nothing (IMHO).

We will talk about the technical side of KeyAssort a little lower down, but here I would like to describe the principle itself that allows you to break a list of keywords (of practically any length) into clusters, i.e. sets of keywords that can be successfully used on one page of the site (optimizing the text, headings, and link mass for them - applying SEO magic).

Where can you get this information? Who will tell you what will "work" and what will not? Obviously, the best adviser is the search engine itself (in our case Yandex, as a storehouse of commercial queries). It is enough to look at the search results for a large amount of data (for example, analyze the TOP 10) for all these queries (from the collected list of the future semantic core) and understand what your competitors managed to successfully combine on one page. If this trend repeats several times, we can talk about a pattern, and on the basis of it the keys can already be split into clusters. A minimal sketch of this idea in code follows below.
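
Here is a hedged sketch of that principle, assuming you have already collected the TOP-10 URLs for every query (KeyAssort or Key Collector can do this; here the data is just a hard-coded dictionary). Two queries are considered compatible if at least `strength` identical URLs appear in both of their TOP-10s - this mirrors the "grouping strength" idea, though the real KeyAssort algorithm may differ in its details.

```python
# top10: query -> set of URLs found in the TOP-10 for that query (already collected)
top10: dict[str, set[str]] = {
    "buy plastic windows": {"site-a.ru/windows", "site-b.ru/okna", "site-c.ru/pvc"},
    "plastic windows price": {"site-a.ru/windows", "site-b.ru/okna", "site-d.ru/shop"},
    "how to wash windows": {"blog-e.ru/wash", "blog-f.ru/clean", "site-c.ru/pvc"},
}

def shared_urls(q1: str, q2: str) -> int:
    """Number of identical URLs in the TOP-10 of two queries."""
    return len(top10[q1] & top10[q2])

def compatible(q1: str, q2: str, strength: int = 2) -> bool:
    # The more competitors that rank one page for both queries,
    # the safer it is to target both queries with a single page.
    return shared_urls(q1, q2) >= strength

print(compatible("buy plastic windows", "plastic windows price"))  # True
print(compatible("buy plastic windows", "how to wash windows"))    # False
```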

KeyAssort lets you set in the settings the "strictness" with which clusters will be formed (i.e. which keys can be used on one page). For example, for commerce it makes sense to tighten the selection requirements, because it is important to get a guaranteed result, even at the cost of slightly higher spending on texts for a larger number of clusters. For informational sites, on the contrary, you can make some concessions in order to get potentially more traffic with less effort (with a slightly higher risk that a cluster "won't work out"). We will talk about how to do this a little later.

But what if you already have a site with a bunch of articles, and you want to expand the existing semantic core and optimize existing articles for a larger number of keywords in order to get more traffic with minimal effort (slightly shifting the emphasis of the keywords)? This program answers that question too - you can mark the queries for which existing pages are already optimized as marker queries, and around them KeyAssort will assemble a cluster of additional queries that your competitors quite successfully promote (on one page) in the search results. It's interesting how it turns out...

How to collect a pool of requests on the topic you need?

Any semantic core begins, in fact, with collecting a huge number of queries, most of which will be discarded. But the main thing is that at the initial stage those "pearls" get into it, for which individual pages of your future or existing site will then be created and promoted. At this stage, the most important thing is probably to collect as many more or less suitable queries as possible and not miss anything; weeding out the empty ones later is easy.

A fair question arises: what tools should you use? There is one unequivocal and very correct answer - different ones. The more, the better. However, these methods of collecting the semantic core should probably be listed, with general assessments and recommendations for their use.

  1. Yandex Wordstat and its analogues in other search engines - initially these tools were intended for those who place contextual advertising, so that they could understand how popular certain phrases are with search engine users. It is clear that SEO specialists also use these tools, and very successfully. I can recommend taking a look at the article on Wordstat, as well as the article mentioned at the very beginning of this publication (it will be useful for beginners).

    Among the shortcomings of Wordstat, one can note:

    1. A monstrous amount of manual work (it definitely requires automation, which will be discussed a little later), both in checking phrases based on a key and in checking associative queries from the right column.
    2. The Wordstat output limit (2000 queries and not a line more) can be a problem, because for some phrases (for example, "work") this is extremely little, and we lose sight of low-frequency, and sometimes even mid-frequency, queries that could bring good traffic and income (many people miss them). You either have to rack your brains, or use alternative methods (for example, keyword databases, one of which we will look at below - and it is free!).
  2. KeyCollector (and its free little brother Slovoeb) - a few years ago the appearance of this program was simply a "salvation" for many SEO workers (and even now it is hard to imagine working on a semantic core without KC). A bit of backstory: I bought KC two or three years ago, but used it for a few months at most, because the program is tied to the hardware (the computer's internals), which I change several times a year. So, even while having a KC license, I use SE - that's what laziness leads to.

    You can read the details in the corresponding article. Both programs will help you collect queries from both the right and left columns of Wordstat, as well as the search suggestions for the key phrases you need. Suggestions are what drops out of the search bar when you start typing a query. Users often do not finish typing and simply choose the most suitable option from this list. SEO specialists have figured this out and use such queries in optimization.

    KC and SE allow you to gather a very large pool of queries right away (although it may take a long time, or require buying XML limits, but more on that below) and easily weed out the dummies, for example by checking the frequency of phrases in quotation marks (study the basics if you don't understand what this is about - the links are at the beginning of the publication) or by setting a list of stop words (especially relevant for commerce). After that, the entire query pool can easily be exported to Excel for further work or for loading into KeyAssort (the clusterer), which will be discussed below.

  3. Serpstat (and other similar services) - allows you to enter the URL of your site and get a list of your competitors in the Yandex and Google search results. And for each of those competitors you can get the complete list of keywords for which they managed to break through and reach certain heights (i.e. get traffic from search engines). The summary table will contain the frequency of each phrase, the site's position in the Top for it, and a bunch of other more or less useful information.

    Not so long ago I used almost the most expensive Serpstat tariff plan (though only for one month) and during this time managed to save almost a gigabyte of various useful things in Excel. I collected not only competitors' keys, but also simply query pools for the key phrases I was interested in, and also collected the semantic cores of my competitors' most successful pages, which, it seems to me, is also very important. One thing is bad - I still can't find the time to come to grips with processing all this invaluable information. But it is quite possible that KeyAssort will finally cure the numbness I feel in front of this monstrous colossus of data.

  4. Bukvarix is a free database of keywords in its own software shell. Selecting keywords takes a fraction of a second (uploading to Excel takes minutes). I don't remember how many millions of words it contains, but the reviews about it (including mine) are simply excellent, and most importantly, all this wealth is free! True, the distribution package weighs 28 GB, and when unpacked the database occupies more than 100 GB on the hard disk, but these are trifles compared to the simplicity and speed of collecting a query pool.

    But the main advantage over Wordstat and KeyCollector is not only the speed of collection. The main thing is that there is no 2000-line limit per query, which means that no low-frequency and ultra-low-frequency queries will escape us. Of course, the frequencies can be refined once again through the same KC, and the list screened with stop words, but Bukvarix performs the main task remarkably well. True, sorting by columns does not work in it, but if you save the query pool to Excel you can sort it however you please (a small sketch of this follows right after the list).
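
Since column sorting in Bukvarix itself is limited, a minimal pandas sketch of that "sort it in Excel" step might look like this. The file name and the column headers ("Keyword", "Frequency") are assumptions - use whatever headers your export actually contains.

```python
import pandas as pd

# Hypothetical Bukvarix/Serpstat export; adjust the file and column names to your data.
df = pd.read_excel("bukvarix_export.xlsx")

df["Keyword"] = df["Keyword"].str.strip().str.lower()
df = (df.drop_duplicates(subset="Keyword")          # remove duplicate queries
        .sort_values("Frequency", ascending=False)  # most frequent first
        .reset_index(drop=True))

df.to_excel("query_pool_sorted.xlsx", index=False)
print(df.head(10))
```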

Probably, at least a few more "serious" tools for collecting a query pool will be suggested by you in the comments, and I will gladly borrow them...

How to clean the collected search queries of "dummies" and "garbage"?

The list obtained as a result of the manipulations described above is likely to be very large (if not huge). Therefore, before loading it into the clusterer (for us that will be KeyAssort) it makes sense to clean it up a bit. To do this, the query pool can, for example, be loaded into KeyCollector and the following removed (a short sketch of this cleanup follows the list):

  1. Queries with too low a frequency (I personally check the frequency in quotes, but without exclamation marks). Which threshold to choose is up to you, and it largely depends on the subject, the competition, and the type of resource the semantic core is being collected for.
  2. For commercial queries it makes sense to use a list of stop words (such as "free", "download", "abstract", as well as, for example, the names of cities, years, etc.) in order to remove in advance from the core what is known not to bring target buyers to the site (weeding out the freeloaders who are looking for information rather than goods, and residents of other regions, for example).
  3. Sometimes it makes sense to be guided by the competition indicator for a given query when screening out. For example, for the queries "plastic windows" or "air conditioners" you shouldn't even try - failure is guaranteed in advance.
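
Here is a minimal sketch of that cleanup, assuming the exact ("quoted") frequencies have already been collected (e.g. exported from Key Collector): drop queries below a frequency threshold and queries containing stop words. The threshold, the stop-word list, and the hard-coded data are all illustrative assumptions.

```python
# query -> exact ("quoted") frequency, e.g. exported from Key Collector
frequencies = {
    "buy air conditioner moscow": 480,
    "air conditioner free download": 90,
    "air conditioner abstract": 15,
    "split system installation": 320,
}

STOP_WORDS = {"free", "download", "abstract"}  # extend with cities, years, etc.
MIN_FREQUENCY = 30                             # threshold depends on your niche

def clean(pool: dict[str, int]) -> list[str]:
    kept = []
    for query, freq in pool.items():
        if freq < MIN_FREQUENCY:
            continue                      # too rare to bother with
        if set(query.split()) & STOP_WORDS:
            continue                      # attracts freeloaders, not buyers
        kept.append(query)
    return kept

print(clean(frequencies))  # ['buy air conditioner moscow', 'split system installation']
```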

You might say that this is simple in words but difficult in practice. Not at all. Why? Because one person I respect (Mikhail Shakin) did not spare the time and recorded a video with a detailed description of how to clean up search queries in Key Collector:

Thanks to him for that, because these things are much easier and clearer to show than to describe in an article. In general, you can handle it - I believe in you...

Setting up the KeyAssort semantic core clusterer for your site

Now the most interesting part begins. This whole huge list of keys will need to be somehow broken up (scattered) across the individual pages of your future or existing site (which you want to significantly improve in terms of traffic brought from search engines). I will not repeat myself about the principles and the complexity of this process - otherwise why did I write the first part of this article?

So, our plan is quite simple. We go to the official KeyAssort website and download the demo version to give the program a try (the difference between the demo and the full version is that the demo cannot export the collected core), and after that it will be possible to pay (1900 rubles is not much by modern standards). If you want to immediately start working on the core "on a clean copy", then it is better to choose the full version with export capability.

The KeyAssort program itself cannot collect keys (this, in fact, is not its job), so they will need to be loaded into it. This can be done in four ways - manually (it probably makes sense to resort to this method to add a few keys found after the main collection), plus three batch ways to import keys (a small sketch of preparing such import files follows the list):

  1. in txt format - when you just need to import a list of keys (each on a separate line of the txt file).
  2. as well as two variants of the Excel format: with the parameters you will need later, or with the sites collected from the TOP 10 for each key. The latter can speed up the clustering process, because then KeyAssort does not have to parse the search results itself to collect this data. However, the URLs from the TOP 10 must be fresh and accurate (such a list can be obtained, for example, in Key Collector).
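
As a hedged illustration of preparing those import files, here is a short Python sketch that writes the query pool to a plain txt file (one key per line) and to a simple Excel sheet. The exact column layout KeyAssort expects for the Excel variants is not documented here, so treat the headers as placeholders and match them to the program's import dialog.

```python
from pathlib import Path
import pandas as pd

queries = ["buy plastic windows", "plastic windows price", "pvc windows installation"]

# Variant 1: plain txt, one key per line.
Path("keys.txt").write_text("\n".join(queries), encoding="utf-8")

# Variant 2: Excel with extra columns (headers are placeholders - check KeyAssort's
# import dialog for the exact names and order it expects).
pd.DataFrame({"Keyword": queries, "Frequency": [0] * len(queries)}) \
  .to_excel("keys.xlsx", index=False)
```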

But why am I telling you all this - it's better to see it once:

In any case, first remember to create a new project in the same "File" menu, and only then will the import function become available:

Let's take a look at the program settings (there are very few of them), because for different types of sites a different set of settings may be optimal. Open the "Service" tab - "Program settings" and go straight to the "Clustering" tab:

The most important thing here is, perhaps, choosing the type of clustering you need. The program can use two principles by which requests are combined into groups (clusters) - hard and soft.

  1. Hard - all queries that fall into one group (suitable for promotion on one page) must be found together on one page for the required number of competitors from the Top (this number is set in the "grouping strength" line).
  2. Soft - all queries that fall into the same group must partially co-occur on the same page for the required number of competitors from the Top (this number is also set in the "grouping strength" line).

There is a good picture that clearly illustrates all this:

If it is not clear, then never mind, because this is just an explanation of the principle, and what matters to us is not theory, but practice, which says that:

  1. Hard clustering is best used for commercial sites. This method gives high accuracy, which increases the probability of reaching the Top for the queries combined on one page of the site (with a proper approach to optimizing the text and promoting it), although there will be fewer queries in each cluster, which means more clusters overall (more pages will have to be created and promoted).
  2. Soft clustering makes sense for information sites, because the articles will have a high completeness indicator (they will be able to answer a number of user queries that are similar in meaning), which is also taken into account in ranking. And there will be fewer pages overall. (A hedged sketch of the difference between the two modes follows this list.)
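
To make the hard/soft distinction more concrete, here is a minimal sketch under the assumption that a cluster is built around a marker query: in hard mode at least `strength` URLs must appear in the TOP-10 of every query in the group at once, while in soft mode each query only has to share that many URLs with the marker query. This is my reading of the idea, not KeyAssort's actual implementation.

```python
def hard_cluster_ok(group: list[str], top10: dict[str, set[str]], strength: int) -> bool:
    # Hard: at least `strength` URLs must appear in the TOP-10 of EVERY query
    # in the group, i.e. some competitors rank one page for all of them at once.
    common = set.intersection(*(top10[q] for q in group))
    return len(common) >= strength

def soft_cluster_ok(marker: str, group: list[str], top10: dict[str, set[str]], strength: int) -> bool:
    # Soft: each query only needs to share `strength` URLs with the marker query,
    # so the group is looser and usually larger.
    return all(len(top10[marker] & top10[q]) >= strength for q in group if q != marker)

top10 = {
    "semantic core": {"a.ru/1", "b.ru/2", "c.ru/3"},
    "how to build a semantic core": {"a.ru/1", "b.ru/2", "d.ru/4"},
    "semantic core tools": {"a.ru/1", "e.ru/5", "f.ru/6"},
}
group = list(top10)
print(hard_cluster_ok(group, top10, strength=2))                   # False
print(soft_cluster_ok("semantic core", group, top10, strength=1))  # True
```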

Another setting that is important, in my opinion, is the "Use Marker Phrases" checkbox. Why might it be needed? Let's see.

Let's say you already have a website, but its pages were optimized not for a query pool but for a single query each, or you consider the existing pool insufficient. At the same time, you wholeheartedly want to expand the semantic core not only by adding new pages, but also by improving existing ones (this is easier in terms of implementation). So you need to collect a full query pool for each such page.

That is what this setting is for. After activating it, you will be able to put a tick next to any phrase in your list of queries. You just have to find those main queries for which the existing pages of your site have already been optimized (one per page), and KeyAssort will build clusters around them. That's basically it. More in this video:

Another setting that is important (for the correct operation of the program) lives on the "Data collection from Yandex XML" tab. What Yandex XML is, you can read about in the linked article. In short, SEOs constantly parse Yandex and Wordstat results, creating an excessive load on its capacity. For protection, a captcha was introduced, and special access via XML was developed, where the captcha no longer appears and the data for the keys being checked is not distorted. True, the number of such checks per day is strictly limited.

What determines the number of allocated limits? How Yandex evaluates your site. You can check by following this link (in the same browser where you are authorized in Ya.Webmaster). For example, for me it looks like this:

Below there is also a graph of the distribution of limits by time of day, which is also important. If you need to check a lot of queries and you have few limits, that is not a problem. They can be purchased. Not from Yandex directly, of course, but from those who have these limits and do not need them.

The Yandex XML mechanism allows the transfer of limits, and exchanges that have become intermediaries help automate all of this. For example, on XMLProxy you can buy limits for only 5 rubles per 1000 requests, which, you must agree, is not at all expensive.

The limits you buy flow to your "account" there, and in order to use them in KeyAssort you need to go to the "Settings" tab and copy the long link into the "URL for requests" field (do not forget to click on "Your current IP" and then on the "Save" button to bind the key to your computer):

After that, all that remains is to paste this URL into the KeyAssort settings window, in the "Url for requests" field:
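
If you want to sanity-check that the limits URL works before pasting it into KeyAssort, a rough Python sketch like the one below can help. It assumes the URL accepts an extra `query` parameter and returns XML containing document URLs, which matches the usual Yandex XML response, but the exact parameter names and response structure depend on your provider - adjust accordingly.

```python
import requests
import xml.etree.ElementTree as ET

# Paste the long "URL for requests" from XMLProxy / Yandex XML here (placeholder below).
XML_URL = "https://xmlproxy.example/search/xml?user=...&key=..."

def top_urls(query: str, limit: int = 10) -> list[str]:
    resp = requests.get(XML_URL, params={"query": query}, timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    # Yandex XML responses normally list results as <doc><url>...</url></doc> elements;
    # adjust the element path if your provider wraps them differently.
    return [el.text for el in root.iter("url")][:limit]

print(top_urls("semantic core"))
```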

That's it for the KeyAssort settings - you can start clustering the semantic core.

Keyword clustering in KeyAssort

So, I hope you have set everything up (selected the desired type of clustering, connected your own or purchased Yandex XML limits), figured out how to import the list of queries, and successfully loaded it all into KeyAssort. What's next? Then, of course, comes the most interesting part - launching the data collection (the URLs of the sites from the Top 10 for each query) and the subsequent clustering of the entire list based on this data and the settings you made.

To get started, click on the "Collect data" button and wait from several minutes to several hours while the program scans the Tops for all the queries in the list (the more there are, the longer the wait):

It took me about a minute for three hundred queries (this is a small core for a series of articles about working on the Internet). After that you can proceed directly to clustering; the button of the same name on the KeyAssort toolbar becomes available. This process is very fast, and literally in a few seconds I got a whole set of clusters (groups), presented as nested lists:

For more details on using the program's interface, as well as on creating clusters for existing site pages, it is better to watch the video, because it is much clearer:

We got everything we wanted, and mind you - fully automatically. Beautiful.

Although, if you are creating a new site, then in addition to clustering it is very important to outline the future structure of the site (define sections/categories and distribute the clusters among them for future pages). Oddly enough, this is quite convenient to do in KeyAssort, though no longer automatically but manually. How?

Again, it is easier to see it once - everything is set up literally before our eyes by simply dragging clusters from the left window of the program to the right one:

If you did buy the program, you can export the resulting semantic core (in fact, the structure of the future site) to Excel. Moreover, on the first tab you can work with the queries as a single list, and on the second tab the structure you configured in KeyAssort will be saved. Very, very convenient. (A small sketch of working with such an export follows below.)
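
As a final hedged sketch, here is how you might turn such an export into a simple content plan with pandas. The file name, sheet index, and column names ("Cluster", "Keyword") are assumptions - match them to whatever your KeyAssort export actually contains.

```python
import pandas as pd

# Hypothetical KeyAssort export: the second sheet holds the structure (cluster -> keywords).
df = pd.read_excel("keyassort_export.xlsx", sheet_name=1)

# Group keywords by cluster and print a rough content plan: one future page per cluster.
for cluster, rows in df.groupby("Cluster"):
    keywords = rows["Keyword"].tolist()
    print(f"Page: {cluster} ({len(keywords)} queries)")
    for kw in keywords:
        print(f"  - {kw}")
```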

Well, that's about it. I am ready to discuss and hear your opinion on collecting the semantic core for a site.

Good luck to you! See you soon on the pages of the blog.
