• For every page that cannot be crawled successfully, a network error code is stored. Source vs. Target Network Error: a network error can occur on both ends of a link, the source…
  • For every page that can be crawled, an HTTP code is returned and stored. This is only possible if there are no network errors during crawling. In LRT Smart, the HTTP-Code column is called "Source HTTP Code" and is enabled by default.… See the first sketch after this list.
  • Recrawling of all the links, every time. You get fresh data that is 1 to 30 days old, not data that is 5 years old. With LRT you can be 100% sure that you base your decisions on accurate link data, because we re-crawl all the links for you before you see them. That's why it takes…
  • Depending on its interpretation of META tags and robots.txt for different types of links and redirects, LRT displays different status icons next to each link in your reports. See the second sketch after this list.
  • To help you quickly detect if certain URLs are blocked by robots.txt, we provide detailed robots.txt metrics for every source page. The metrics include, for example, Robots.txt Bots (robots.txt allows general bots for the current URL) and Robots.txt Googlebot (robots.txt allows Googlebot for…). See the third sketch after this list.
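The distinction drawn in the first two items — an HTTP code is stored when the server answers, a network error code is stored when it does not — can be illustrated with a minimal crawl sketch. This is not LRT's implementation; the crawl_status function and its return fields are illustrative assumptions, and only Python's standard library is used.

```python
import urllib.request
import urllib.error

def crawl_status(url: str, timeout: float = 10.0) -> dict:
    """Illustrative sketch: fetch one URL and record either an
    HTTP code (the page answered) or a network error (unreachable)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # The server answered: store the HTTP status code.
            return {"url": url, "http_code": resp.status, "network_error": None}
    except urllib.error.HTTPError as e:
        # HTTP-level failure (e.g. 404, 500): an HTTP code still exists.
        return {"url": url, "http_code": e.code, "network_error": None}
    except urllib.error.URLError as e:
        # Network-level failure (DNS, refused connection, timeout):
        # no HTTP code is available, so the error itself is stored.
        return {"url": url, "http_code": None, "network_error": str(e.reason)}
```

A source vs. target check, as mentioned in the first item, would simply run such a function for both ends of a link.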
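How META tags and robots.txt signals might translate into per-link status icons (fourth item above) could look like the following sketch. The label strings and the link_status function are hypothetical, not LRT's actual icon set or logic.

```python
def link_status(rel_attr: str, meta_robots: str, robots_txt_allowed: bool) -> str:
    """Hypothetical mapping from crawl signals to a status label."""
    if not robots_txt_allowed:
        return "blocked-by-robots.txt"  # crawler may not fetch the URL at all
    if "nofollow" in (rel_attr or "").lower():
        return "nofollow"               # link-level signal: rel="nofollow"
    if "noindex" in (meta_robots or "").lower():
        return "noindex"                # page-level: <meta name="robots" content="noindex">
    return "follow"                     # no restricting signal found

# Example: a link carrying rel="nofollow" on an otherwise open page.
print(link_status("nofollow", "index, follow", True))  # -> "nofollow"
```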
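The two robots.txt metrics named in the last item (general bots vs. Googlebot) can be reproduced with Python's standard urllib.robotparser; the URLs below are placeholders, not LRT endpoints.

```python
from urllib.robotparser import RobotFileParser

url = "https://example.com/some-page"          # placeholder source page
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # robots.txt of the same host
rp.read()

# "Robots.txt Bots": does robots.txt allow general bots for the current URL?
print("general bots allowed:", rp.can_fetch("*", url))

# "Robots.txt Googlebot": does robots.txt allow Googlebot for the current URL?
print("Googlebot allowed:", rp.can_fetch("Googlebot", url))
```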

