VikiTronBot/1.0
Indexing Robot
Identification & Policies
How our crawler works, what data it collects, how it respects robots.txt, and how you can contact us.
Identification
User-Agent: VikiTronBot/1.0 (+https://vikitron.com/bot)
Contact: abuse@bizkithub.com (DMCA/abuse), support@bizkithub.com (technical questions)
IP Ranges: will be published here; we recommend whitelisting at the CIDR level.
Crawling Policies
We respect robots.txt, noindex and nofollow directives.
Crawl Politeness:
- Adaptive crawl rate, defaulting to 1-2 requests/s per host
- Adjusts to response times and error rates
- Exponential backoff, with at most 3 retry attempts (see the sketch after this list)
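A minimal sketch of this retry behavior, assuming a Python crawler; the delay values, jitter, and use of the requests library are illustrative, not our actual implementation:

    import random
    import time

    import requests

    UA = "VikiTronBot/1.0 (+https://vikitron.com/bot)"

    def polite_fetch(url, base_delay=0.75, max_retries=3):
        """Fetch with exponential backoff: retry at most 3 times on
        server errors, doubling the wait (0.75 s, 1.5 s, 3 s) plus jitter."""
        for attempt in range(max_retries + 1):
            response = requests.get(url, headers={"User-Agent": UA}, timeout=10)
            if response.status_code < 500:
                return response  # success, or a client error not worth retrying
            if attempt < max_retries:
                time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.25))
        return response  # retries exhausted; caller sees the last 5xx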
Freshness:
- Key pages checked more frequently
- Multi-tier schedule based on observed change frequency (illustrated below)
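As an illustration of such a schedule (the tier names and intervals are assumptions, not our published values):

    from datetime import datetime, timedelta

    # Hypothetical recrawl tiers keyed by how often a page is observed to change.
    RECRAWL_INTERVALS = {
        "hot": timedelta(hours=6),    # key pages, frequent changes
        "warm": timedelta(days=1),
        "cold": timedelta(weeks=2),   # rarely changing pages
    }

    def next_check(tier: str, last_crawled: datetime) -> datetime:
        """Return when a page in the given tier is due for a recheck."""
        return last_crawled + RECRAWL_INTERVALS[tier]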
Collected Data
We collect:
- DNS records (public)
- robots.txt files
- Certificates from TLS handshake
- Public HTTP metadata (headers, status codes)
- Certificate Transparency log excerpts
- Reverse DNS data (see the sketch after this list)
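Everything on this list is visible to any ordinary client; a Python sketch of gathering it for one host (example.com is a placeholder):

    import socket
    import ssl

    host = "example.com"  # placeholder

    ip = socket.gethostbyname(host)       # public DNS record
    rdns = socket.gethostbyaddr(ip)[0]    # reverse DNS data

    # Certificate presented during a normal TLS handshake
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            cert = tls.getpeercert()

    print(ip, rdns, cert["subject"])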
We do NOT store:
- Content behind authentication
- Personal data from forms
Opt-out / Restrictions
Block in robots.txt:
    User-agent: VikiTronBot
    Disallow: /
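Directives can also be combined to throttle or partially restrict rather than fully block; an illustrative example (the delay value and path are placeholders):

    User-agent: VikiTronBot
    Crawl-delay: 5
    Disallow: /private/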
Reduce speed: use the Crawl-delay directive, as in the example above.
Sensitive directories: prefer a combination of Disallow and server-side rules (401/403).
Test Your Robots.txt
See how VikiTronBot interprets your robots.txt file. Validate syntax and test URLs against your rules.
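You can also approximate this check locally with Python's standard robotparser module (a sketch; the URLs are placeholders):

    from urllib import robotparser

    rp = robotparser.RobotFileParser("https://example.com/robots.txt")
    rp.read()  # fetch and parse the file

    print(rp.can_fetch("VikiTronBot", "https://example.com/some/page"))
    print(rp.crawl_delay("VikiTronBot"))  # Crawl-delay value, if any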
Bot Verification
We send From: and User-Agent headers matching the identification above.
On request, we can fetch a verification URL/token, e.g. https://example.com/.well-known/vikitron-verify.txt
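A minimal server-side sketch of such verification, assuming the CIDR list above has been published (the range below is a placeholder from the RFC 5737 documentation block):

    import ipaddress

    VIKITRON_CIDRS = [ipaddress.ip_network("192.0.2.0/24")]  # placeholder; use the published list

    def is_vikitronbot(remote_ip: str, user_agent: str) -> bool:
        """Accept a request as VikiTronBot only if both the User-Agent
        string and the source IP match the published identification."""
        if "VikiTronBot/1.0" not in user_agent:
            return False
        ip = ipaddress.ip_address(remote_ip)
        return any(ip in net for net in VIKITRON_CIDRS)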
