One of the most frowned-upon tools in the SEO world is ScrapeBox. It is often classified as a spam tool, but in my opinion it is not. Whether it is spam or not depends entirely on the person using it; at the end of the day, it is a tool that collects data (among many other features), and like any tool it can be used to do "good" things or things that are "not so good".
As you know if you work in SEO, the last few months have changed many things about how we build links to our websites. We now need to focus more on the quality of each link than on the quantity, and as the months go by this will become even more important: every link we place can either benefit or harm our link building strategy.
In this guide I'll show you how to use ScrapeBox and the benefits it has brought me.
ScrapeBox is a tool for scraping Google, automating tasks, and building your own footprints (if you don't know what footprints are, I link to a full article on them here). Many professionals think ScrapeBox is a tool for acquiring links in bulk and see it as a spam tool. I don't blame them, since many people use this kind of software for black hat practices that I would never recommend.
ScrapeBox is not a spam tool; in fact I use ScrapeBox day to day for white hat tasks, as you will see throughout this guide.
A walk around the tool's interface
Before listing all the possibilities this tool offers in your day-to-day work, let me show you its interface and briefly explain each of its sections:
Harvester section: This is where you enter the footprints you want to use to find blogs, domains, and other resources. You can also select the "Custom Footprint" option to customize your searches further and get more out of this section.
Keywords section: This is where you insert the keywords to combine with your custom footprint.
For example, if we want to find SEO-related blogs running on wordpress.com, we would simply enter the following line: site:wordpress.com "SEO". ScrapeBox would then return all the results it finds for this footprint and keyword.
Harvested URLs: In this window you will find all the results obtained from the footprints and keywords entered in section 2.
Select Engines & Proxies: Here you select the search engines and insert the proxies you want to use to start scraping information. If you are going to use this tool for link detox work or competitive analysis, I recommend also selecting search engines such as Bing and Yahoo so you can compare more information afterwards.
Comment Poster: This option lets you post comments on sites you have previously harvested with the appropriate footprint. I do not recommend using it for that, since it is a black hat practice that would end up getting your project penalized. You can, however, use it to ping your links and get them indexed faster.
Start your link building strategy with the Keyword Scraper option
Let's start with the good stuff: one of the features I like most. The Keyword Scraper option helps you gather a large number of related keywords, which you can then add to your list and use to scrape more information. It suggests a host of related keywords, in the purest Google Suggest style.
To use this wonderful option, open the Keyword Scraper from section 2 (Keywords) via Scrape >> Keyword Scraper, enter the keywords you have chosen on the left-hand side, and click the "Scrape" button. Once the task finishes you will see how ScrapeBox starts showing you related keywords.
To avoid duplicate or unwanted keywords, there is a button called "Remove". Here you can remove duplicates, drop keywords that do not contain the word you want, and use several other very useful options to finish shaping your keyword list.
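To picture what that "Remove" cleanup does, here is a minimal sketch in Python. The function name is illustrative, not part of ScrapeBox itself; it simply deduplicates a keyword list (case-insensitively) and keeps only keywords containing a required word:

```python
def clean_keywords(keywords, must_contain=None):
    """Deduplicate keywords (case-insensitive) and optionally keep only
    those containing a required word, mimicking ScrapeBox's Remove options."""
    seen = set()
    cleaned = []
    for kw in keywords:
        key = kw.strip().lower()
        if not key or key in seen:
            continue  # empty or duplicate keyword
        if must_contain and must_contain.lower() not in key:
            continue  # does not contain the required word
        seen.add(key)
        cleaned.append(kw.strip())
    return cleaned

print(clean_keywords(["SEO tools", "seo tools", "link building", "SEO audit"],
                     must_contain="seo"))
# ['SEO tools', 'SEO audit']
```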
ScrapeBox. Complete Guide to Good Practice.
Now we start the complete guide to good practice with ScrapeBox. Having seen some of the most important features of this software, you will now see the practical benefits its daily use brings.
1. Scrape URLs with ScrapeBox's options
It is definitely one of the best scrapers you can find on the internet, if not the best. So, once you have your list of selected keywords (remember to choose keywords directly or indirectly related to your niche, with significant search volume), enter them in the Keywords section I described above and move on to step 2.
2. Use footprints.
Footprints are commands inserted into Google to perform more specific searches. To help you understand this better, let's look at an example:
Imagine you want to find all the URLs Google has indexed that contain the keyword "SEO" in the URL itself. For such a specific search we would use an operator to narrow it down; in this case the query would read: inurl:SEO.
As you can see, footprints are very useful in day-to-day SEO work, yet few people use them. I encourage you to start using them and get the most out of your actions.
Here are the 3 search operators you should know:
Site: Lists the URLs indexed for a specific domain.
Inurl: Shows the URLs that contain a specific, previously defined text fragment.
Intitle: Shows the URLs that contain a previously defined text fragment in the meta title tag.
As additional information, here are some of the advanced search operators that Google recommends.
Now that you know what search operators, or footprints, are and what they are for, you already know what to do when you face needs like these:
- I need to get links from blogs and bloggers related to my topic.
- I need to get links from websites where my product and related products can be mentioned.
- I need to know which platforms I can share my products on.
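Mechanically, what ScrapeBox does with your footprints and keywords boils down to expanding them into a list of search queries. This small sketch (function name is mine, not ScrapeBox's) shows that expansion using the standard Google operator syntax:

```python
def build_queries(footprint, keywords):
    """Combine one footprint with each keyword into a full search query,
    the way a harvester expands footprint + keyword list."""
    return [f'{footprint} "{kw}"' for kw in keywords]

queries = build_queries("site:wordpress.com", ["SEO", "link building"])
for q in queries:
    print(q)
# site:wordpress.com "SEO"
# site:wordpress.com "link building"
```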
3. Find good sites for guest posting with ScrapeBox.
Suppose you have a web analytics blog and want to promote your professional services as an analyst and attract clients through your blog. Some of the actions you should take to achieve those objectives are:
Publish specific content about web analytics on your blog regularly.
Publish specific articles on related, visible blogs that accept guest posts.
Be present on the best business platforms related to your niche.
I propose a three-month strategy in which you manage to get 30 reviews, with their corresponding links (do-follow / no-follow), on authority sites related to your niche.
To do this, use the following combination of footprints:
"Powered by WordPress" + "Google Analytics" and site:.com
Let me explain the input data in detail:
"Powered by WordPress": I include this footprint because I have found that most people with a blog related to online marketing run it on the WordPress platform.
"Google Analytics": I use this footprint because I only want back sites that talk specifically about Google Analytics and also run on WordPress.
site:.com: I use this advanced search operator to select only domains ending in .com.
If you want to take it a step further, you can even use the following footprint:
"Powered by WordPress" + "Marketing" and intext:"guest post"
As you can see, this performs a more general search. With it we make sure every result is a WordPress blog that talks about marketing and most likely has a guest post section or has already published guest articles.
Now that you understand roughly how footprints work, just open ScrapeBox and put everything you've learned into practice.
4. Check the quality of the links found.
Once you have done your keyword research and ScrapeBox has produced your first list of URLs, you should start checking some basic data for each of the links. You should also use the Moz API.
STEP 1 >> Work with domains: The strategy I proposed involves guest articles, relationships, and authority pages. In this case you should stop paying attention to individual articles and focus on the entire domain. To do this, go to the "Trim to Root" option under "Manage Lists" to cut all URLs down to their root domains.
STEP 2 >> Remove duplicates: After the previous step, click "Remove/Filter" >> "Remove Duplicate URLs" to eliminate duplicate domains.
STEP 3 >> Check PageRank: Check the PageRank of each URL to make sure this indicator is above 2 or 3, which means the domain has at least minimal authority. You can see this in the "Get Domain PageRank" section.
STEP 4 >> Check the Domain Authority (DA): Do this with the "Page Authority Addon" option. Besides domain authority, it also gives you page authority, MozRank, and external link information.
STEP 5 >> Get emails from your list of URLs: One of the options I like most about ScrapeBox. With the "Grab Emails locally from list" option you get a list of the emails of the owners of those domains, so you can contact them to arrange reviews, guest articles, and synergies. Definitely an option that saves you hours of work.
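As a rough sketch of what steps 1, 2 and 5 do under the hood, here is a small Python illustration, assuming nothing about ScrapeBox's internals: trim each URL to its root domain while dropping duplicates, then pull email addresses out of a page's raw text (the regex is a simple illustration, not a complete RFC 5322 matcher):

```python
import re
from urllib.parse import urlparse

def trim_to_root(urls):
    """Reduce each URL to scheme://host/ and drop duplicates,
    like 'Trim to Root' plus 'Remove Duplicate URLs'."""
    roots, seen = [], set()
    for url in urls:
        p = urlparse(url)
        root = f"{p.scheme}://{p.netloc}/"
        if root not in seen:
            seen.add(root)
            roots.append(root)
    return roots

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def grab_emails(page_text):
    """Extract candidate contact emails from a page's raw text."""
    return sorted(set(EMAIL_RE.findall(page_text)))

print(trim_to_root(["https://example.com/blog/post-1",
                    "https://example.com/about"]))
# ['https://example.com/']
print(grab_emails("Contact us at editor@example.com or ads@example.com"))
# ['ads@example.com', 'editor@example.com']
```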
5. Remove duplicates with this tool
Another very useful feature of ScrapeBox. With this option you can detect duplicate URLs when you are doing toxic link detection work.
To do a proper link audit, especially on a site penalized for its links, you must be thorough and base the work on several different data sources:
- You can do it with the Link Detox tool.
- Combine it with manual work with the help of Ahrefs.
- Combine it with manual work based on Open Site Explorer data as well.
As you can see, after doing this job you would end up with three separate files of link data. One of the main objectives is to combine these 3 files into a single one and remove duplicate links or repeated domains.
To do this, use the "DupeRemove" addon. Once installed and running, you will see a window like this:
Now go to "Select source files to merge" and select the 3 files you want to combine into one. After selecting the files containing the link data, click the "Merge Files" option to merge them into a single file.
Finally, to remove duplicate domains or URLs, click "Select Source File" and choose where you want to export the deduplicated URLs or domains.
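The merge-and-dedupe logic above can be sketched in a few lines of Python. This is a stand-in for the DupeRemove workflow, not its actual code; the three example lists stand for the hypothetical Link Detox, Ahrefs, and Open Site Explorer exports:

```python
from urllib.parse import urlparse

def merge_and_dedupe(url_lists):
    """Merge several URL lists, keeping only the first URL seen per domain."""
    seen_domains = set()
    merged = []
    for urls in url_lists:
        for url in urls:
            domain = urlparse(url).netloc.lower()
            if domain and domain not in seen_domains:
                seen_domains.add(domain)
                merged.append(url)
    return merged

detox = ["https://spammy.example/page"]
ahrefs = ["https://spammy.example/other", "https://clean.example/post"]
ose = ["https://clean.example/post2"]
print(merge_and_dedupe([detox, ahrefs, ose]))
# ['https://spammy.example/page', 'https://clean.example/post']
```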
6. Get the "META" tags for each URL on your list
ScrapeBox also lets you scrape titles and descriptions from a list of URLs. To do this, choose the "Grab harvested meta info from URL list" option, located under "Grab/Check".
With this you get a quick view of the title, description, and keywords for each of the URLs in the list you imported.
This is useful if you export the information to an Excel file and use it to check how many pages use an exact-match keyword in the title, or show some other over-optimization that Google frowns upon.
Once you detect this over-optimization, all you have to do is change those titles for more natural ones.
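For the curious, the meta-grabbing idea can be sketched with nothing but the Python standard library. This parses a page's HTML for the title and the description/keywords meta tags; a real crawler would of course fetch each URL first:

```python
from html.parser import HTMLParser

class MetaGrabber(HTMLParser):
    """Collect <title> text and description/keywords meta tags from HTML."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") in ("description", "keywords"):
            self.meta[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

html = ('<html><head><title>SEO Guide</title>'
        '<meta name="description" content="A guide to SEO."></head></html>')
grabber = MetaGrabber()
grabber.feed(html)
print(grabber.title, grabber.meta)
```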
7. Check the validity of each of your internal and external links.
Another practice you should carry out on a website, whether during an audit or simply when testing different aspects of the site, is checking the status of its links, both internal and external.
For this test, use the ScrapeBox "Alive Checker" addon. This option makes the task much easier.
To detect any problems with your links you just have to configure it properly: simply add response codes 200 and 301 as valid and tick the "Follow Relocation" box.
The program will then mark as broken every link that returns a response code other than 200 or 301.
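A minimal stand-in for that alive-check logic, written against the Python standard library (the function names are mine, and `check_url` makes real network requests, so point it at your own URL list):

```python
from urllib.request import urlopen, Request
from urllib.error import URLError, HTTPError

OK_CODES = {200, 301}

def is_alive(status_code):
    """Mirror the configuration above: 200 and 301 count as alive."""
    return status_code in OK_CODES

def check_url(url, timeout=10):
    """Return (url, status_code_or_None, alive_flag). urlopen follows
    redirects, so the final status after redirection is what gets checked."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            code = resp.status
    except HTTPError as e:
        code = e.code       # server answered with an error status
    except URLError:
        code = None         # DNS failure, timeout, refused connection...
    return url, code, code is not None and is_alive(code)
```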
8. Check whether your project's URL changes are being indexed by Google.
If you are making major changes to your project that affect the total number of internal pages, you will want to make sure Google re-indexes all the new URLs and all the changes you have implemented.
To verify all this, I recommend combining Screaming Frog with ScrapeBox. Here's how...
STEP 1: The first thing to do is crawl your project with Screaming Frog, using the basic settings you can see in the following images:
Once Screaming Frog has finished the process, export the data to a CSV file, then open it and copy it into ScrapeBox. Another valid option is to convert that CSV into a .txt file and then import it, as you can see in the image below:
After importing the list of URLs, just click the "Check Indexed" button and select the "Google Indexed" option.
If you have followed these 3 steps correctly, the tool will return a pop-up with the information on indexed and non-indexed URLs.
I recommend exporting the list of non-indexed URLs so you can analyze it in more detail.
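Under the hood, an index check amounts to asking Google "site:&lt;url&gt;" for each page and seeing whether a result comes back. The sketch below only prepares those queries from an exported CSV (the "Address" column name is an assumption about the Screaming Frog export); actually submitting them is what ScrapeBox with proxies automates, since Google throttles direct automated queries:

```python
import csv
import io

def site_queries_from_csv(csv_text, url_column="Address"):
    """Read exported URLs from CSV text and build one site: query per URL."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [f"site:{row[url_column]}" for row in reader if row.get(url_column)]

export = "Address\nhttps://example.com/a\nhttps://example.com/b\n"
print(site_queries_from_csv(export))
# ['site:https://example.com/a', 'site:https://example.com/b']
```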
9. Check the outbound links of the URLs on your list.
As a final option that I really like: with ScrapeBox you can also find the outbound links of each of the URLs you obtained when searching for sites to place your reviews, as I discussed in point 3 of this article.
For this there is an addon called "Outbound Link Checker". This option shows the outgoing links of each of the URLs obtained after searching for places to insert your links or reviews.
As you can see in the picture, it returns some useful data. Once you have this information you can also check the authority of these URLs and decide for yourself whether they are valid or not.
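The core of an outbound-link check is simple to sketch: parse a page's HTML and keep only the links pointing off the page's own domain. This standard-library illustration is my own stand-in, not the addon's code:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Accumulate every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def outbound_links(html, page_url):
    """Return links whose host differs from the page's own host."""
    own_host = urlparse(page_url).netloc.lower()
    collector = LinkCollector()
    collector.feed(html)
    return [l for l in collector.links
            if urlparse(l).netloc and urlparse(l).netloc.lower() != own_host]

html = ('<a href="/internal">in</a>'
        '<a href="https://other.example/page">out</a>')
print(outbound_links(html, "https://mysite.example/post"))
# ['https://other.example/page']
```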
ScrapeBox. My conclusions
As you can see, even though many professionals, or "not so professional" people, consider ScrapeBox a spam tool, it is actually a very powerful tool that will save you time on your daily tasks as well as on more advanced ones. As I have mentioned throughout the article, you can use it to do good work or for other kinds of practices; it all depends on the person at the controls.
When you hear an industry professional badmouth ScrapeBox, consider that perhaps they have never used this tool or are unaware of many of its features, like the ones I have just shown you throughout this article.
If you liked this article and want to learn more advanced options of this software, I recommend you check out Chuiso, the only person I know who knows this tool by heart and who has helped me discover a host of advanced options.
Finally, I would love to know what you use ScrapeBox for and what benefits it has brought to your daily work.