{"id":2011,"date":"2017-11-23T13:55:48","date_gmt":"2017-11-23T11:55:48","guid":{"rendered":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/"},"modified":"2017-11-23T13:55:48","modified_gmt":"2017-11-23T11:55:48","slug":"semrush-ranking-factors-study-2017-methodology-demystified","status":"publish","type":"post","link":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/","title":{"rendered":"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified"},"content":{"rendered":" <style>\r\n  .ui-tabs {display: table; }\r\n  .ui-tabs-nav {display: table;}\r\n \r\na.ui-tabs-anchor {\r\n\tfont-family: Tahoma;\r\n\tfont-size: 15px; \r\n\tcolor: #B52700;\r\n        margin: 5px 20px;\r\n}\r\n\r\ndiv.ui-tabs-panel {\r\n\tfont-family: Tahoma;\r\n\tfont-size: 14px;\r\n\tfont-weight: normal;\r\n\tcolor: #B35B22;\r\n}\r\n\r\n  <\/style><p>SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified<\/p>\n<p>In the second edition of the\u00a0<a href=\"https:\/\/www.semrush.com\/ranking-factors\/\">SEMrush Ranking Factors Study 2017<\/a> we\u2019ve added 5 more backlink-related factors and compared the strength of their influence on a particular URL vs. an entire domain. According to tradition, we offer you a deeper look at our methodology. \u00a0Back in June, when the first edition of the study was published, many brows were raised in disbelief \u2014 indeed, direct website visits are usually assumed to be the result of higher SERP positions, not vice versa. And yet site visits is exactly what our study confirmed to be the most important Google ranking factor among those we analyzed, both times. Moreover, the methodology we used was unique to the field of SEO studies \u2014 we traded correlation analysis for the Random Forest machine learning algorithm. 
As the ultimate goal of our study was to help SEOs prioritize tasks and do their jobs more effectively, we would like to reveal the behind-the-scenes details of our research\u00a0and bust some popular misconceptions, so that you can safely rely on our takeaways.<\/p>\n<p><span class=\"b-blog__image\"><a href=\"https:\/\/www.semrush.com\/ranking-factors\"><img loading=\"lazy\" alt=\"SEMrush Ranking Factors Study 2017\" height=\"1091\" src=\"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/e3\/db\/e3db7ec6144e38e5a3d8a65f5cde0df7\/resize\/531x1091\/backlink-ranking-factors-infographic.png\" width=\"531\" role=\"button\" \/><\/a><\/span><\/p>\n<p>Jokes aside, this post is for real nerds, so here is a short glossary:<\/p>\n<p><strong>Decision tree<\/strong> \u2014 a tree-like structure that represents a machine learning algorithm usually applied to classification tasks. It splits a training sample dataset into homogeneous groups\/subsets based on the most significant of all the attributes.<\/p>\n<p><strong>Supervised machine learning<\/strong> \u2014 a type of machine learning algorithm that trains a model to find patterns in the relationship between input variables (features, A) and output variable (target value, B): B = f(A). The goal of SML is to train this model on a sample of the data so that, when offered out-of-sample data, the algorithm can predict the target value precisely, based on the feature set offered. The training dataset acts as the teacher overseeing the learning process. The training is considered successful and terminates when the algorithm achieves an acceptable performance quality.<\/p>\n<p><strong>Feature<\/strong> (<em>or attribute, or input variable<\/em>) \u2014 a characteristic of a separate data entry used in analysis. 
For our study and this blog post, features are the alleged ranking factors.<\/p>\n<p><strong>Binary classification<\/strong> \u2014 a type of classification task that falls into the supervised learning category. The goal of this task is to predict a target value (=class) for each data entry, and for binary classification, it can be either 1 or 0 only.<\/p>\n<h2>Using the Random Forest Algorithm For the Ranking Factors Study<\/h2>\n<p>The Random Forest algorithm was <a href=\"https:\/\/link.springer.com\/article\/10.1023%2FA%3A1010933404324\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">developed<\/a> by Leo Breiman and Adele Cutler in the early 2000s. It hasn\u2019t undergone any major changes since then, which proves its high quality and universality: it is used for classification, regression, clustering, feature selection and other tasks.<\/p>\n<p>Although the Random Forest algorithm is not very well known to the general public, we picked it for a number of good reasons:<\/p>\n<ul>\n<li>\n<p>It is one of the most popular machine learning algorithms, featuring <a href=\"https:\/\/www.stat.berkeley.edu\/~breiman\/RandomForests\/cc_home.htm\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">unexcelled accuracy<\/a>. Its first and foremost application is ranking the importance of variables (and its nature is perfect for this task \u2014 we\u2019ll cover this later in this post), so it seemed an obvious choice.<\/p>\n<\/li>\n<\/ul>\n<ul>\n<li>\n<p>The algorithm treats data in a certain way that minimizes errors:<\/p>\n<ol>\n<li>\n<p>The random subspace method offers each learner random samples of features, not all of them. This guarantees that the learner won\u2019t be overly focused on a pre-defined set of features and won\u2019t make biased decisions about an out-of-sample dataset.<\/p>\n<\/li>\n<li>\n<p>The bagging\u00a0or bootstrap aggregating method also improves precision. 
Its main point is offering learners not a whole dataset, but random samples of data.<\/p>\n<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<p>Given that we do not have a single decision tree, but rather a whole forest of hundreds of trees, we can be sure that each feature and each pair of domains will be analyzed approximately the same number of times. Therefore, the Random Forest method is stable and operates with minimum errors.<\/p>\n<h3>The Pairwise Approach: Pre-Processing Input Data<\/h3>\n<p>We have decided to base our study on a set of 600,000 keywords from the worldwide database (US, Spain, France, Italy, Germany and others), the URL position data for top 20 search results, and a list of alleged ranking factors. As we were not going to use correlation analysis, we had to conduct binary classification prior to applying the machine learning algorithm to it. This task was implemented with the Pairwise approach \u2014 one of the most popular machine-learned ranking methods used, among others, by Microsoft in its research projects.<\/p>\n<p>The Pairwise approach implies that instead of examining an entire dataset, each SERP is studied individually &#8211; we compare all possible pairs of URLs (the first result on the page with the fifth, the seventh result with the second, etc.) in regards to each feature. Each pair is assigned a set of absolute values, where each value is a quotient after dividing the feature value for the first URL by the feature value for the second URL. On top of that, each pair is also assigned a target value that indicates whether the first URL is positioned higher than the second one on the SERP (target value = 1) or lower (target value = 0).<\/p>\n<p><strong>Procedure outcomes:<\/strong><\/p>\n<ol>\n<li>Each URL pair receives a set of quotients for each feature and a target value of either 1 or 0. 
This variety of numbers will be used as a training dataset for the decision trees.<\/li>\n<li>We are now able to make statistical observations that certain feature values and their combinations tend to result in a higher SERP position for a URL. This allows us to build a hypothesis about the importance of certain features and make a forecast about whether a certain set of feature values will lead to higher rankings.<\/li>\n<\/ol>\n<h3>Growing the Decision Tree Ensemble: Supervised Learning<\/h3>\n<p>The dataset we received after the previous step is absolutely universal and can be used for any machine learning algorithm. Our preferred choice was Random Forest, an ensemble of decision trees.<\/p>\n<p>Before the trees can make any reasonable decisions, they have to train \u2014\u00a0this is when the supervised machine learning takes place. To make sure the training is done correctly and unbiased decisions about the main data set are made, the bagging and random subspace methods are used.<\/p>\n<p><span class=\"b-blog__image\"><img loading=\"lazy\" alt=\"Using the Random Forest algorithm for the ranking factors study\" height=\"281\" src=\"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/83\/72\/8372541f7a90c0f0ab43a44083fc37b3\/resize\/500x281\/forest-gif.gif\" width=\"500\" role=\"button\" \/><\/span><\/p>\n<p><strong><a href=\"https:\/\/www.stat.berkeley.edu\/~breiman\/bagging.pdf\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">Bagging<\/a><\/strong> is the process of creating a training dataset by sampling with replacement. Let\u2019s say we have X lines of data. According to bagging principles, we are going to create a training dataset for each decision tree, and this set will also have X lines. However, these sample sets will be populated randomly and with replacement \u2014 so each set will include only approximately two-thirds of the original X lines, with some values duplicated. 
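Here is a tiny, stdlib-only Python sketch of this sampling with replacement (the numbers are illustrative, not the study's actual data):

```python
import random

random.seed(42)

X = 10_000                     # number of lines (URL pairs) in the original dataset
dataset = list(range(X))

# Bagging: draw X samples with replacement to form one tree's training set
bootstrap = [random.choice(dataset) for _ in range(X)]

unique = set(bootstrap)
out_of_bag = set(dataset) - unique

# With replacement, each line has a (1 - 1/X)^X, roughly 1/e, chance of never
# being drawn, so about two-thirds of the lines land in the bootstrap sample.
print(f'in-bag: {len(unique) / X:.2%}, out-of-bag: {len(out_of_bag) / X:.2%}')
```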
About one-third of the original values remain untouched and will be used once the learning is over.<\/p>\n<p>We\u00a0did a similar thing for the features using the <strong>random subspace method<\/strong> \u2014 the decision trees were trained on random samples of features instead of the entire feature set.<\/p>\n<p>Not a single tree uses the whole dataset and the whole list of features. But having a forest of multiple trees allows us to say\u00a0that every value and every feature are very likely to be used approximately the same number of times.<\/p>\n<p><strong>Growing the Forest<\/strong><\/p>\n<p>Each decision tree repeatedly partitions the training sample dataset based on the most important variable and does so until each subset consists of homogeneous data entries. The tree scans the whole training dataset and chooses the most important feature and its precise value, which becomes a kind of pivot point (node) and splits the data into two groups. For one group, the condition chosen above is true; for the other \u2014 false (YES and NO branches). All final subgroups (node leaves) receive an average target value based on the target values of the URL pairs that were placed into a certain subgroup.<\/p>\n<p>Since the trees use the sample dataset to grow, they learn while growing. Their learning is considered successful and high-quality when a target percentage of correctly guessed target values is achieved.<\/p>\n<p>Once the whole ensemble of trees is grown and trained, the magic begins \u2014 the trees are now allowed to process the out-of-sample data (about one-third of the original dataset). A URL pair is offered to a tree only if it hasn\u2019t encountered the same pair during training. This means that a URL pair is not offered to 100 percent of the trees in the forest. Then, voting takes place: for each pair of URLs, a tree gives its verdict, aka the probability of one URL taking a higher position in the SERP compared to the second one. 
The same action is taken by all other trees that meet the \u2018haven\u2019t seen this URL pair before\u2019 requirement, and in the end, each URL pair gets a set of probability values. Then all the received probabilities are averaged. Now there is enough data for the next step.<\/p>\n<h3>Estimating Attribute Importance with Random Forest<\/h3>\n<p>Random Forest produces extremely credible results when it comes to attribute importance estimation. The assessment is conducted as follows:<\/p>\n<ol>\n<li>\n<p>The attribute values are mixed up across all URL pairs, and these updated sets\u00a0of values are offered to the algorithm.<\/p>\n<\/li>\n<li>\n<p>Any changes in the algorithm\u2019s quality or stability are measured (whether the percentage of correctly guessed target values remains the same or not).<\/p>\n<\/li>\n<li>\n<p>Then, based on the values received, conclusions can be made:<\/p>\n<\/li>\n<\/ol>\n<ul>\n<li>\n<p>If the algorithm\u2019s quality drops significantly, the attribute is important. The heavier the slump in quality, the more important the attribute is.<\/p>\n<\/li>\n<li>\n<p>If the algorithm\u2019s quality remains the same, then the attribute is of minor importance.<\/p>\n<\/li>\n<\/ul>\n<p>The procedure is repeated for all the attributes. As a result, a rating of the most important ranking factors is obtained.<\/p>\n<h2>Why We Think Correlation Analysis is Bad for Ranking Factors Studies<\/h2>\n<p>We intentionally abandoned the general practice of using correlation analysis, and\u00a0we still received quite a few comments like \u201cCorrelation doesn\u2019t mean causation,\u201d \u201cThose don\u2019t look like ranking factors, but more like correlations.\u201d Therefore we feel this point deserves a separate paragraph.<\/p>\n<p>First and foremost, we would like to stress again that the initial dataset used for the study is a set of highly changeable values. Remember: we examined not one, but 600,000 SERPs. 
Each SERP is characterized by its own average attribute value, and this uniqueness is completely disregarded in the process of correlation analysis. For this reason, we believe that each SERP should be treated separately and with respect to its originality.<\/p>\n<p>Correlation analysis gives reliable results only when examining the relationship between two variables (for example, the\u00a0impact of the number of backlinks on a SERP position). \u201cDoes this particular factor influence position?\u201d \u2014 \u00a0this question can be answered quite precisely\u00a0since only one impacting variable is involved. But are we in a position to study each factor in isolation? Probably not, as we all know that there is a whole bunch of factors that influence a URL position in a SERP.<\/p>\n<p>Another quality criterion for correlation analysis is the variety of the received correlation ratios. For example, if there is a lineup of correlation ratios like (-1, 0.3 and 0.8), then it is pretty fair to say that there is one parameter that is more important than others. The closer the ratio\u2019s absolute value, or modulus, is to one, the stronger the correlation. If the ratio\u2019s modulus is under 0.3, such a correlation can be disregarded \u2014 the dependency between the two variables, in this case, is too weak to make any trustworthy conclusions. For all the factors we analyzed, the correlation ratio was under 0.3, so we had to discard this method.<\/p>\n<p>One more reason to dismiss this analysis method was the high sensitivity of the correlation value to outliers and noise, and the data for various keywords suggests a lot of them. If one extra data entry is added to the dataset, the correlation ratio changes immediately. Hence this metric can\u2019t be viable in the case of multiple variables, e.g. 
in a ranking factors study, and can even lead to incorrect deductions.<\/p>\n<p>Finally, it is hard to believe that one or two factors with a correlation ratio modulus so close to one exist \u2014 if this were true, anyone could easily hack Google\u2019s algorithms, and we would all be in position 1!<\/p>\n<h2>Frequently Asked Questions<\/h2>\n<p>Although we tried to answer most of the frequently raised questions above, here are some more for the more curious readers.<\/p>\n<h3>Where does the study dataset come from? Is it SEMrush data?<\/h3>\n<p>The traffic and user behavior data within our dataset is the anonymized <a href=\"https:\/\/en.wikipedia.org\/wiki\/Clickstream\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">clickstream<\/a> data that comes from third-party data providers. The data is accumulated from the behavior of over 100 million real internet users, and over a hundred different apps and browser extensions are used to collect it.<\/p>\n<h3>Why\u00a0didn\u2019t we use artificial neural networks (ANNs)?<\/h3>\n<p>Although artificial neural networks are perfect for tasks with a large number of variables, e.g. image recognition (where each pixel is a variable), they produce results that are difficult to interpret and don\u2019t allow you to compare the weight of each factor. Besides, ANNs require a massive dataset and a huge number of features to produce reliable results, and the input data we had collected didn\u2019t match this description.<\/p>\n<p>Unlike Random Forest, where each decision tree votes independently and thus a high level of reliability is guaranteed, neural networks process data in one pot. There is nothing to indicate that using ANNs for this study would result in more accurate results.<\/p>\n<p>Our main requirements for a research method were stability and the ability to identify the importance of the factors. 
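To illustrate the importance-estimation idea (shuffle one factor's values and re-measure prediction quality), here is a toy, stdlib-only Python sketch; the data is synthetic and a simple stand-in model replaces the actual forest:

```python
import random

random.seed(0)

# Synthetic pairwise dataset: 3 features per URL pair, binary target.
# The target depends only on feature 0; feature 2 is pure noise.
N = 2000
X = [[random.random() for _ in range(3)] for _ in range(N)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def accuracy(data, target, predict):
    return sum(predict(x) == t for x, t in zip(data, target)) / len(target)

# Stand-in 'trained model': thresholds feature 0, as a tree would learn to do.
model = lambda x: 1 if x[0] > 0.5 else 0

base = accuracy(X, y, model)

# Permutation importance: shuffle one feature column, re-measure quality.
drops = {}
for j in range(3):
    col = [x[j] for x in X]
    random.shuffle(col)
    X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
    drops[j] = base - accuracy(X_perm, y, model)
    print(f'feature {j}: quality drop {drops[j]:.3f}')
```

Shuffling the decisive feature cuts accuracy sharply, while shuffling the noise features changes nothing, which is exactly how the rating of factors is read off.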
With that in mind, Random Forest was a perfect fit for our task, as proven by numerous ranking tasks of a similar nature that were also implemented with the help of this algorithm.<\/p>\n<h3>Why are website visits the most important Google ranking factor?<\/h3>\n<p>Hands down, this was probably the most controversial takeaway of our study. When we saw the results of our analysis, we were equally surprised. At the same time, our algorithm was trained on a solid amount of data, so we decided to double-check the facts. We excluded the organic and paid search data, as well as social and referral traffic, and took into account only direct traffic, and the results were pretty much the same \u2014 the position distribution remained unchanged (the graphs on pp. 40-41 of the study illustrate this point).<\/p>\n<p>To us, this finding makes perfect sense and confirms that Google prioritizes domains with more authority, as described in its <a href=\"https:\/\/static.googleusercontent.com\/media\/www.google.com\/en\/\/insidesearch\/howsearchworks\/assets\/searchqualityevaluatorguidelines.pdf\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">Search Quality Evaluator Guidelines<\/a>. Although it may seem that domain authority is just a lame excuse and a very vague and ephemeral concept, these guidelines dispel this myth completely. So, back in 2015 Google introduced this handbook to help estimate website quality and \u201creflect what Google thinks search users want.\u201d<\/p>\n<p>The handbook lists E-A-T, which stands for Expertise, Authoritativeness, and Trustworthiness, as an important webpage-quality indicator. Main content quality and amount, website information (i.e. who is responsible for the website), and website reputation all influence the E-A-T of a website. 
We suggest thinking of it in the following way: if a URL ranks in the top 10, by default, it contains content that is relevant to a user search query.<\/p>\n<p>But to distribute the places\u00a0among\u00a0these ten leaders, Google starts to count additional parameters. We all know that there is a whole team of <a href=\"http:\/\/searchengineland.com\/library\/google\/google-search-quality-raters\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">search quality raters<\/a> behind the scenes, which is responsible for training Google\u2019s search algorithms and improving search results&#8217; relevance. As advised by Google Quality Evaluator Guidelines, raters should give priority to high-quality pages and teach the algos to do so as well. So, the ranking algorithm is trained to assign a higher position to pages that belong to trusted and highly authoritative domains, and we think this may be the reason behind the data we received for direct traffic and for its importance as a signal. 
For more information, check out our <a href=\"https:\/\/www.semrush.com\/blog\/eat-and-ymyl-new-google-search-guidelines-acronyms-of-quality-content\/\">EAT and YMYL: New Google Search Guidelines Acronyms of Quality Content<\/a> blog post.<\/p>\n<p><span class=\"b-blog__image zoom\"><img loading=\"lazy\" alt=\"Domain reputation and E-A-T \u2014 Google Search Quality Evaluator Guidelines\" height=\"295\" src=\"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/d2\/a1\/d2a1a333015164edf21a1a3f313b00f6\/resize\/885x295\/ranking-factors-study-methodology.png\" width=\"885\" role=\"button\" \/><\/span><\/p>\n<p>Here\u2019s more: at the recent SMX East conference, Google\u2019s Gary Illyes <a href=\"https:\/\/searchengineland.com\/gary-illyes-ask-anything-smx-east-285706\" rel=\"nofollow noopener noreferrer\" target=\"_blank\">confirmed<\/a> that \u2018how people perceive your site will affect your business.\u2019 And although this, according to Illyes, does not necessarily affect how Google ranks your site, it still seems important to invest in earning users\u2019 loyalty: happy users = happy Google.<\/p>\n<div>\n<blockquote class=\"twitter-tweet\">\n<p dir=\"ltr\" lang=\"en\" xml:lang=\"en\">The Google algorithm is like a human. It looks at brand sentiment and online reputation to understand your website better. <a href=\"https:\/\/twitter.com\/methode?ref_src=twsrc%5Etfw\" rel=\"noopener noreferrer\" target=\"_blank\">@methode<\/a> <a href=\"https:\/\/twitter.com\/hashtag\/SMX?src=hash&amp;ref_src=twsrc%5Etfw\" rel=\"noopener noreferrer\" target=\"_blank\">#SMX<\/a><\/p>\n<p>\u2014 Ari Finkelstein (@arifinkels) <a href=\"https:\/\/twitter.com\/arifinkels\/status\/923596797943209985?ref_src=twsrc%5Etfw\" rel=\"noopener noreferrer\" target=\"_blank\">October 26, 2017<\/a><\/p>\n<\/blockquote>\n<\/div>\n<p>What does this mean to you again? 
Well, brand awareness (estimated, among other things, by your number of direct website visits) strongly affects your rankings and deserves effort on par with your SEO work.<\/p>\n<h3>Difference in Ranking Factors Impact\u00a0on a URL vs a Domain<\/h3>\n<p>As you may have spotted, every graph from our study shows a noticeable spike for the second position. We promised to have a closer look at this deviation and thus added a new dimension to our study. The second edition covers the impact of the three most important factors (direct website visits, time on site and the number of referring domains) on the rankings of a particular URL, rather than just the domain that it resides on.<\/p>\n<p>One would assume that the websites on the first position are the most optimized, and yet we saw that every trend line showed a drop on the first position.<\/p>\n<p>We connected this deviation with branded keyword search queries. A domain will probably take the first position in the SERP for any search query that contains its branded keywords. And regardless of how well a website is optimized, it will rank number one anyway, so it has nothing to do with SEO efforts. This explains why ranking factors affect a SERP\u2019s second position more than the first one.<\/p>\n<p>To prove this, we decided to look at our data from a new angle: we investigated how the ranking factors impact single URLs that appear on the SERP. \u00a0For each factor, we built separate graphs showing the distribution of URLs and domains across the first 10 SERP positions (please see pp. 50-54). Although the study includes graphs only for the top three most influential factors, the tendency that we discovered persists for other factors as well. \u00a0<\/p>\n<p>What does this mean to you as a marketer? When a domain is ranking for a branded keyword, many factors lose their influence. 
However, when optimizing for non-branded keywords, keep in mind that the analyzed ranking factors have more influence on the positions of the particular URL than on the domain on which it resides. That means that the rankings of a specific page are more sensitive to on-page optimization, link-building efforts and other optimization techniques.<\/p>\n<h2>Conclusion: How to Use the SEMrush Ranking Factors Study<\/h2>\n<p>There is no guarantee that, if you improve your website\u2019s metrics for any of the above factors, your pages will start to rank higher. We conducted a very thorough study that allowed us to draw reliable conclusions about the importance of these 17 factors to ranking higher on Google SERPs. Yet, this is just a reverse-engineering job well done, not a universal action plan \u2014 and this is what each and every ranking factors study is about. No one but Google knows all the secrets. However, here is a workflow that we suggest for dealing with our research:<\/p>\n<ul>\n<li>\n<p><strong>Step 1<\/strong>. Understand which keywords you rank for \u2014 do they belong to low, medium or high search volume groups?<\/p>\n<\/li>\n<li>\n<p><strong>Step 2<\/strong>. Benchmark yourself against the competition: take a closer look at the methods they use to hit the top 10 and at their metrics \u2014 do they have a large number of backlinks? Are their domains secured with HTTPS?<\/p>\n<\/li>\n<li>\n<p><strong>Step 3<\/strong>. 
Using this study, pick and start implementing the optimization techniques that will yield the best results based on your keywords and the competition level on SERPs.<\/p>\n<\/li>\n<\/ul>\n<p>Once again, we encourage you to take a closer look at our <a href=\"https:\/\/www.semrush.com\/ranking-factors\/\">study<\/a>, reconsider the E-A-T concept and get yourself a good, fact-based SEO strategy!<\/p>\n<p><aside class=\"b-ranking-factors-shortcode js-ranking-factor-shortcode\"><a href=\"https:\/\/www.semrush.com\/ranking-factors\/?utm_source=blog_en&amp;utm_medium=banner_footer&amp;utm_campaign=RF2\" class=\"b-ranking-factors-shortcode__link js-ranking-factor-shortcode__link\">&#013;<\/p>\n<div class=\"b-ranking-factors-shortcode__inner\">&#013;<\/p>\n<div class=\"b-ranking-factors-shortcode__cnt\">&#013;<\/p>\n<h5 class=\"b-ranking-factors-shortcode__title\">What makes your rankings go up when you&#8217;re done with the on-page SEO?<\/h5>\n<p>&#013;<\/p>\n<p class=\"b-ranking-factors-shortcode__txt\">Ranking Factors study 2.0 gives the answer<\/p>\n<p>&#013;<br \/>\n                <button class=\"b-ranking-factors-shortcode__btn -warning s-btn\">&#013;<br \/>\n                    <span class=\"s-btn__text\">Get PDF<\/span>&#013;<br \/>\n                <\/button>&#013;\n            <\/div>\n<p>&#013;\n        <\/p><\/div>\n<p>&#013;<br \/>\n    <\/a>&#013;<br \/>\n<\/aside>\n<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.semrush.com\/blog\/semrush-ranking-factors-study-2017-methodology-demystified\/\">Read more&#8230;&#8230;&gt;click Here&lt;<\/a><\/p>\n<script type='text\/javascript'>\r\n jQuery(document).ready(function() {\r\n    jQuery( \"#tabs_2011\" ).tabs({\r\n    collapsible: true,\r\n    active: false\r\n        });\r\n\tjQuery( \".scroller_2011\" ).width(jQuery( \".scroller_2011\" ).width()+1);\r\n\t\r\n\t\r\n\t\r\n  });\r\n  \r\n  <\/script>\r\n  ","protected":false},"excerpt":{"rendered":"<p>SEMrush Ranking Factors Study 2017 \u2014 Methodology 
Demystified In the second edition of the\u00a0SEMrush Ranking Factors Study 2017 we\u2019ve added 5 more backlink-related factors and compared the strength of their influence on a particular URL vs. an entire domain. According to tradition, we offer you a deeper look at our methodology. \u00a0Back in June, when &hellip; <a href=\"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v18.6 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified - test<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified - test\" \/>\n<meta property=\"og:description\" content=\"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified In the second edition of the\u00a0SEMrush Ranking Factors Study 2017 we\u2019ve added 5 more backlink-related factors and compared the strength of their influence on a particular URL vs. an entire domain. According to tradition, we offer you a deeper look at our methodology. 
\u00a0Back in June, when &hellip; Continue reading SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified\" \/>\n<meta property=\"og:url\" content=\"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/\" \/>\n<meta property=\"og:site_name\" content=\"test\" \/>\n<meta property=\"article:author\" content=\"ytuuitutut\" \/>\n<meta property=\"article:published_time\" content=\"2017-11-23T11:55:48+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/e3\/db\/e3db7ec6144e38e5a3d8a65f5cde0df7\/resize\/531x1091\/backlink-ranking-factors-infographic.png\" \/>\n<meta name=\"twitter:card\" content=\"summary\" \/>\n<meta name=\"twitter:creator\" content=\"@fdsdfsdf\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"nickisosnowski\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"17 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified - test","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/","og_locale":"en_US","og_type":"article","og_title":"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified - test","og_description":"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified In the second edition of the\u00a0SEMrush Ranking Factors Study 2017 we\u2019ve added 5 more backlink-related factors and compared the strength of their influence on a particular URL vs. an entire domain. According to tradition, we offer you a deeper look at our methodology. 
\u00a0Back in June, when &hellip; Continue reading SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified","og_url":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/","og_site_name":"test","article_author":"ytuuitutut","article_published_time":"2017-11-23T11:55:48+00:00","og_image":[{"url":"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/e3\/db\/e3db7ec6144e38e5a3d8a65f5cde0df7\/resize\/531x1091\/backlink-ranking-factors-infographic.png"}],"twitter_card":"summary","twitter_creator":"@fdsdfsdf","twitter_misc":{"Written by":"nickisosnowski","Est. reading time":"17 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebSite","@id":"http:\/\/147.91.204.66\/wordpress\/#website","url":"http:\/\/147.91.204.66\/wordpress\/","name":"test","description":"test","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"http:\/\/147.91.204.66\/wordpress\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"},{"@type":"ImageObject","@id":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/#primaryimage","inLanguage":"en-US","url":"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/e3\/db\/e3db7ec6144e38e5a3d8a65f5cde0df7\/resize\/531x1091\/backlink-ranking-factors-infographic.png","contentUrl":"https:\/\/d30cz2g5jd7t8z.cloudfront.net\/media\/e3\/db\/e3db7ec6144e38e5a3d8a65f5cde0df7\/resize\/531x1091\/backlink-ranking-factors-infographic.png"},{"@type":"WebPage","@id":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/#webpage","url":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/","name":"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified - 
test","isPartOf":{"@id":"http:\/\/147.91.204.66\/wordpress\/#website"},"primaryImageOfPage":{"@id":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/#primaryimage"},"datePublished":"2017-11-23T11:55:48+00:00","dateModified":"2017-11-23T11:55:48+00:00","author":{"@id":"http:\/\/147.91.204.66\/wordpress\/#\/schema\/person\/05346933e4e7e1a4b1b7ec131921d054"},"breadcrumb":{"@id":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/"]}]},{"@type":"BreadcrumbList","@id":"http:\/\/147.91.204.66\/wordpress\/semrush-ranking-factors-study-2017-methodology-demystified\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"http:\/\/147.91.204.66\/wordpress\/"},{"@type":"ListItem","position":2,"name":"SEMrush Ranking Factors Study 2017 \u2014 Methodology Demystified"}]},{"@type":"Person","@id":"http:\/\/147.91.204.66\/wordpress\/#\/schema\/person\/05346933e4e7e1a4b1b7ec131921d054","name":"nickisosnowski","image":{"@type":"ImageObject","@id":"http:\/\/147.91.204.66\/wordpress\/#personlogo","inLanguage":"en-US","url":"http:\/\/1.gravatar.com\/avatar\/7e3b073a75a374e458fb196d0a70fc13?s=96&d=mm&r=g","contentUrl":"http:\/\/1.gravatar.com\/avatar\/7e3b073a75a374e458fb196d0a70fc13?s=96&d=mm&r=g","caption":"nickisosnowski"},"description":"radio sam 
svuda","sameAs":["ytuuitutut","https:\/\/twitter.com\/fdsdfsdf"],"url":"http:\/\/147.91.204.66\/wordpress\/author\/nickisosnowski\/"}]}},"_links":{"self":[{"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/posts\/2011"}],"collection":[{"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/comments?post=2011"}],"version-history":[{"count":0,"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/posts\/2011\/revisions"}],"wp:attachment":[{"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/media?parent=2011"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/categories?post=2011"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/147.91.204.66\/wordpress\/wp-json\/wp\/v2\/tags?post=2011"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}