[{"categories":["BIND","Security"],"contents":"At ISC, we sincerely value the contributions of our users and security researchers, who analyze and probe our software for vulnerabilities. BIND, in particular, has had a long career, and the BIND project has issued 6-7 CVEs on average per year for the past 26 years. Since 1999, we have published 186 BIND vulnerabilities, so we have ample experience with vulnerability reporting, triage, and remediation.\nImpact of LLMs on the volume of security reports Since the start of 2026, we have seen a sharp uptick in reports of potential security vulnerabilities. Other open source projects popular with security researchers have been experiencing the same surge in reports, mostly due to easy access to LLM tools for code analysis.\nLike others, we have found a high incidence of false positives in these reports, and the burden of triaging one alarming report after another has severely impacted our team. Currently, we are still accepting reports from anyone and doing our best to reproduce the \u0026lsquo;findings,\u0026rsquo; but we may have to stop accepting reports without a working reproduction test. But first, a little more data.\nFor example, here is a nice chart from the YesWeHack project for BIND, graciously funded as part of the FOSSEPS Preparatory Action, managed by the European Commission. The project was launched in December 2025, and we recently paused it to give ourselves time to \u0026lsquo;catch up\u0026rsquo; with the influx of reports.\nThe above chart shows the explosion of reports from just this one source, the YesWeHack program, with a total now of 150 vulnerability reports. Of these, we have determined that 8 are valid reports, and a further 8 are still under investigation. All the rest turned out to be false positives. 
This is a decent benchmark of the ratio of invalid reports to valid reports, because these reporters are well motivated and operating under fairly strict rules established by the YWH platform. The timeframe also overlaps nicely with the timeframe for LLM-assisted security research, so it is a good benchmark of the impact of LLMs.\nTotal Reports to date from YWH Accepted, actionable reports Still in triage Dismissed reports 150 8 8 134 (89%) Volume-wise, this is about as many reports in three months as we might normally see in ten years, with a high rate of false positives. Is this an emergency for us? Absolutely. Are we committed to riding this wave and staying on the board? Yes. Our users should brace for a record number of security releases in 2026 and 2027, not only from us, but from all your open source providers.\nWho is reporting vulnerabilities? One thing that the LLMs have changed is that there are now a lot more people hunting for vulnerabilities. The LLMs have made it much easier to generate a fancy, credible-looking report, and it is now easier to generate a report than it is to read and assess it. The result is a kind of asymmetrical burden on the maintainers. I am hopeful that in a few months the LLMs will have improved to the point that they may be helpful in finding solutions, rather than just finding problems, but at the moment, to lean on an analogy from the current state of warfare, we are fighting manufactured drone swarms with an arsenal of a very few hand-crafted missiles (our staff).\nHere is some recent data on BIND issues marked as confidential, which is the best way to identify our current queue of potential security vulnerabilities. These are the confidential issues as of April 15, 2026. 
The number obviously changes daily, as new issues are submitted and some of the existing issues are investigated and found to be false reports.\nReported by # of confidential issues Security researchers 21 YesWeHack 12 Internal maintainers 15 Actual open source users 5 Those attributed to security researchers represent a relatively small number of people who tend to investigate a class of possible weaknesses and report several issues at once. Historically, a lot of these have tended to be industry-wide DNS protocol weaknesses, which are more difficult to remediate and often require coordination with other open source DNS software systems. Mostly, these people are motivated by the desire to publish a technical paper and/or present at a technical conference for career advancement.\nThe YesWeHack reporters are a mixed group, but the successful reports have come from a handful of investigators who are working for the bounties provided by our sponsor. All of those we plan to fix as security vulnerabilities, along with some we are still investigating, have been moved into our repo as confidential issues.\nThe minority of issues are reported by actual users of our open source, and a couple of these are really misunderstandings, but we haven\u0026rsquo;t closed the issues yet in case the users have further questions. These are the least likely to turn out to be actual vulnerabilities, and often do not include as much detail as reports from researchers, but nevertheless we take them very seriously.\nReported by Average issue duration (months) Security researchers 7 YesWeHack 0 Internal maintainers 25 Actual open source users 7 From the data table above, you can see that the longest open issues are those opened by the BIND maintainers themselves. 
These are fairly likely to be valid reports, because they are opened by people with very specific expertise who are not motivated to spam the project; yet we struggle to find the time to fully investigate these because of the volume of external reports. We are still triaging the older issues, so obviously, if we thought they were particularly dangerous, we would have addressed them sooner.\nCommon problems in LLM-generated security reports The problem with these LLM-generated security reports is that the code analysis on its own is not enough to determine an actual vulnerability. A finding by code analysis is merely the starting point for a good security report.\n1. False positive findings. These findings are factually incorrect about the code behavior. This is the most common LLM error type we see.\nExamples:\nPROXY Protocol Address Spoofing (proxystream.c, proxyudp.c) \u0026ndash; reported as HIGH severity. This is not exploitable. The scanner claims \u0026ldquo;no built-in authentication or ACL check on who is allowed to send PROXYv2 headers.\u0026rdquo; This is wrong:\nPROXY protocol requires explicit opt-in via listen-on ... proxy plain (not enabled by default) allow-proxy ACL defaults to none (deny all), documented in ARM ACL enforcement uses the real peer address, not the spoofed PROXY address (lib/ns/client.c:2231-2234) ISCCC GETC16/GETC32 Out-of-Bounds Read (isccc/util.h) \u0026ndash; reported as HIGH. This is not exploitable. The GETC16 and GETC32 macros are dead code \u0026ndash; never called anywhere in the codebase. The macros that ARE used (GET8, GET16, GET32) have proper bounds checking in every caller within lib/isccc/cc.c. Additionally, the rndc control channel has an IP-based ACL, a 32KiB message limit, and TSIG authentication before any parsing occurs.\n2. Real issue, already mitigated or otherwise unreachable. 
These findings describe real code behavior, but existing mitigations prevent exploitation.\nExamples (we have many, many more):\nHTTP/2 Stream Exhaustion (http.c) \u0026ndash; reported as HIGH. The 50-stream window before flood detection is real. However: DoH requires explicit opt-in (listen-on ... http), per-stream memory is bounded (65535 bytes max, Content-Length validated before allocation), and multiple layers of quotas exist (tcp-clients, http-listener-clients, http-streams-per-connection, idle timeouts). Worst-case memory per connection is ~6.5MB, bounded by connection limits.\nHIP Integer Overflow (hip_55.c) \u0026ndash; reported as MEDIUM. This is a false positive. hit_len is uint8_t (max 255), key_len is uint16_t (max 65535). The sum is at most 65790, which trivially fits in size_t. Both zero-length checks are present. DNS RDLENGTH (16-bit) further constrains values.\n3. Real defects that are minor and not exploitable. In some cases, fixing these is not worth the code churn. Examples:\nACL Mutex Deadlock on Recursive ACLs (acl.c) This is real but LOW severity. insecure_prefix_lock is a non-recursive mutex (PTHREAD_MUTEX_ADAPTIVE_NP on Linux). Nested ACLs in named.conf will deadlock during configuration loading. This is a self-DoS from the administrator\u0026rsquo;s own configuration, not remotely triggerable. It only affects the dns_acl_isinsecure() warning function, not access control decisions.\nfromwire_keydata No Minimum Length (keydata_65533.c) Real but LOW severity. It accepts zero-length KEYDATA records. However, KEYDATA (type 65533) is a private, non-standard type used only internally for RFC 5011 trust anchor management, not received from external sources in normal operation. tostruct_keydata has defensive checks.\n4. Simply not a problem These are the most upsetting reports, where the proposed \u0026lsquo;vulnerability\u0026rsquo; is just how the software is intended to work. 
We can only assume these reporters didn\u0026rsquo;t even read their reports critically before submitting them.\nExample:\nRRL TCP Credit Bypass (rrl.c) \u0026ndash; reported as HIGH. This is by design. TCP responses are intentionally exempt from rate limiting because TCP requires a completed handshake proving the client owns its source IP. An attacker spoofing UDP source IPs for amplification cannot complete a TCP handshake from those IPs. The TCP credit only disables QPS scaling \u0026ndash; base per-client rate limits still apply fully.\n5. Non-production code. Quite a few reports identify \u0026ldquo;vulnerabilities\u0026rdquo; that are found in test harnesses, fuzz drivers, and utility scripts. These include hardcoded keys in fuzz harnesses, stack allocations in test clients, deprecated APIs in test vectors, etc. None affect production deployments.\nMOST OF THESE FALSE POSITIVES CAN BE AVOIDED BY SIMPLY REPRODUCING THE SUPPOSED PROBLEM IN AN ACTUAL SOFTWARE TEST.\nWhat is ISC\u0026rsquo;s commitment? We ask reporters to: Notify us as soon as possible after you discover a real or potential security issue. Provide us a reasonable amount of time to resolve the issue before you disclose it publicly. Do not submit a high volume of low-quality reports. ISC\u0026rsquo;s commitment: Within 3 business days, we will acknowledge that your report has been received. If you do not hear back from ISC within 3 days, please consider that there has been a communication failure and re-submit your report. To the best of our ability, we will confirm the existence of the vulnerability to you and be as transparent as possible about what steps we are taking during the remediation process, including on issues or challenges that may delay resolution. We will maintain an open dialogue to discuss issues, and we request that reporters also be responsive to questions we may have to clarify the issue or verify the solution. When we publish a security advisory, we will acknowledge the reporter. 
Typically we ask the reporter how they would like to be acknowledged. We attempt to publish our vulnerabilities and release fixes on a measured cadence to achieve the best result in operational practice. This is based on our long experience in managing critical infrastructure software with a large user base. Releasing patches at random or frequent intervals puts an excessive burden on operators, so we request that reporters be patient with our disclosure policy. We have published several documents that we hope will help researchers assess our software security:\nISC Software Defect and Vulnerability Disclosure Policy ISC CVSS Scoring Guidelines BIND 9 Source Access and Contributor Guidelines Security.md in the BIND repository The BIND Administrative Reference Manual is online, including a section on Security Assumptions. Tips for Effective Reports We would like to offer some tips on making your security reports more useful, for those reporters who are motivated to actually help open source security.\n1. Submit an issue directly in our development repo First, the basics: while we accept reports via email, we also get a tremendous amount of spam, and our spam filters sometimes reject security reports sent via email. If you have a serious report, it is worth taking the time to make an account on our open GitLab repository and submit an issue there. This has the further advantage that, having submitted the issue yourself, you will be copied on all the updates and be able to work with us as we triage and attempt to reproduce your report.\n2. Use the %^\u0026amp;! Security Issue form In our repository, we have a form designed specifically for reporting security issues. I know everyone thinks the forms are for \u0026lsquo;other people\u0026rsquo;; we all want to skip the annoying form, but in this case, skipping the form just forces us to ask you the same questions again after you open your issue, because we actually need the information on the form. 
When you open a new issue, there is a pulldown to select a form for a bug submission, feature request, or security issue. One further benefit of using the security issue form is that the issue is automatically marked as CONFIDENTIAL. It is important to mark as confidential any issues that may turn out to be vulnerabilities, yet even self-described researchers sometimes submit these in the open. (We got a security vulnerability reported in the open in our repo earlier this week, which we closed as soon as we saw it.)\nHere is the Security Issue form:\nClick to expand Instructions If you found the problem using generative AI tools and you have verified it yourself to be true: write the report yourself and explain the problem as you have learned it. This makes sure the AI-generated inaccuracies and invented issues are filtered out early, before they waste more people\u0026rsquo;s time. Even if you write the report yourself, you must disclose the fact that generative AI was used in your report.\nAs we take security reports seriously, we investigate each report with priority. This work is both time- and energy-consuming and pulls us away from doing other meaningful work. Fake and otherwise made-up security problems effectively prevent us from doing real project work and make us waste time and resources.\nWe immediately ban users who submit fake reports to the project.\nSummary Concisely summarize the bug encountered, preferably in one paragraph or less.\nBIND versions affected Make sure you are testing with the latest supported version of BIND. See https://kb.isc.org/docs/supported-platforms for the current list. The latest source is available from https://www.isc.org/download/#BIND\nPaste the output of named -V here.\nPreconditions and assumptions Is a specific setup needed?\nPlease check the BIND Security Assumptions chapter in the ARM: https://bind9.readthedocs.io/en/latest/chapter7.html#security-assumptions\nE.g. DNSSEC validation must be disabled, etc. 
E.g. Resolver must be configured to forward to attacker\u0026rsquo;s server via DNS-over-TLS, etc. E.g. Authoritative server must be configured to transfer a specific primary zone. E.g. Attacker must be in possession of a key authorized to modify at least one zone. E.g. Attacker can affect the system clock on the server running BIND.\nAttacker\u0026rsquo;s abilities What resources does an attacker need to have under their control to mount this attack?\nE.g. If attacking an authoritative server, does the attacker have to have a prior relationship with it? \u0026ldquo;The authoritative server under attack needs to transfer a malicious zone from the attacker\u0026rsquo;s authoritative server via TLS.\u0026rdquo;\nE.g. If attacking a resolver, does the attacker need the ability to send arbitrary queries to the resolver under attack? Do they need to also control an authoritative server at the same time?\nImpact Who or what is the victim of the attack and what is the impact?\nIs a third party receiving many packets generated by a reflection attack?\nIf the affected party is the BIND server itself, please quantify the impact on legitimate clients: E.g. After launching the attack, the answers-per-second metric for legitimate traffic drops to 1/1000 within the first minute of the attack.\nSteps to reproduce This is extremely important! Be precise and use itemized lists, please.\nEven if a default configuration is affected, please include the full configuration files you were testing with.\nExample:\nUse the attached configuration file\nStart the BIND server with command: named -g -c named.conf ...\nSimulate legitimate clients using the command dnsperf -S1 -d legit-queries ...\nSimulate attack traffic using the command dnsperf -S1 -d attack-queries ...\nWhat is the current bug behavior? Examples: Legitimate QPS drops 1000x. Memory consumption increases out of bounds and the server crashes. The server crashes immediately.\nWhat is the expected correct behavior? 
If the attack causes resource exhaustion, what do you think the correct behavior should be? Should BIND refuse to process more requests?\nWhat heuristic do you propose to distinguish legitimate and attack traffic?\nRelevant logs Please provide log files from your testing. Include full named logs and also the output from any testing tools (e.g. dnsperf, DNS Shotgun, kxdpgun, etc.)\nIf multiple log files are needed, make sure all the files have matching timestamps so we can correlate log events across log files.\nIn the case of resource exhaustion attacks, please also include system monitoring data. You can use https://gitlab.isc.org/isc-projects/resource-monitor/ to gather system-wide statistics.\nCoordination Does this issue affect multiple implementations? Issues affecting multiple implementations require very careful coordination. We have to make sure the information does not leak to the public until vendors are ready to release fixed versions. If it is a multi-vendor issue, we need to know about the situation as soon as possible to start the (confidential!) coordination process within DNS-OARC and other suitable fora.\nPlease list implementations you have tested.\nHave you shared the information with anyone else? Have you informed other affected vendors? Or maybe submitted a paper for review?\nWhat is your plan to publicize this issue? E.g. we plan to go public during conference XYZ on 20XX-XX-XX\nAcknowledgements Please specify whether and how you would like to be publicly credited with discovering the issue. We normally use the format: First_name Last_name, Company_or_Team.\n3. Include explicit steps required to reproduce One advantage we have with BIND is that it is an application, not a module or library, and therefore it is relatively easy to test the proposed vulnerability on a real system to determine whether it is actually exploitable. Surprisingly few of these LLM-enabled bug reporters seem to bother to do this, unfortunately. 
Still fewer include complete instructions for reproducing the issue in a real system. Please don\u0026rsquo;t insist on using some weird Docker image you cooked up or a proprietary configuration you cannot share. Please don\u0026rsquo;t share some fake \u0026lsquo;test script\u0026rsquo; that you haven\u0026rsquo;t actually tried to run yourself \u0026ndash; the LLMs will happily generate a test that proves nothing.\n4. READ the report. This is the most important step!! We have gotten reports that sincerely state that there is a serious problem, when anyone who understands how DNS works would know that the observed behavior is exactly as designed. For example, one recent report boiled down to \u0026ldquo;the authoritative server for a domain can update nameservers for that domain over TCP\u0026rdquo;. Sure, the LLM claimed that if someone could compromise the authoritative server, they could update the authoritative information, but any human reader would know this is simply how the DNS works.\nAnother report we just got claims cache poisoning. Now, this would be a truly alarming vulnerability \u0026ndash; much worse than our usual DDoS. But in order to accomplish this attack, the attacker needs to be able to spoof responses between the resolver and the authoritative server, which basically means the entire system is compromised. This is not a software bug.\nPlease don\u0026rsquo;t submit reports like this. We are wading through dozens of these reports, on an urgent basis, and some of them are real. When we read one that the submitter clearly hasn\u0026rsquo;t even bothered to read themselves, well, it is very frustrating. We don\u0026rsquo;t want to have to ban reporters, but we may soon start to ban reports from people who repeatedly waste our time with complete nonsense.\n5. Go ahead, make a maintainer\u0026rsquo;s day! 
I don\u0026rsquo;t think we have ever had a security report submitted that included both a system test reproducer and a serious, usable working patch, but we dream about it. Maybe someone reading this will make history and do that?\nExample of a real, effective security report Here is an example of a security issue that did turn out to be a CVE, which we have patched and published. This is a good example of a terse but complete and effective report that used the form. (If you click on the link above you will see the actual issue in our repository, which includes our standard checklist for patching and publishing a vulnerability.)\nNote in particular the complete information about the software version in which the bug was found. Note the Reproducer test included! Note the absence of a long, flowery, detailed description that does not help to establish that there is an actual bug in production. Click to expand [Issue 4480](https://gitlab.isc.org/isc-projects/bind9/-/issues/4480) ### Summary Authoritative servers with a KEY RR in a zone or validating resolvers are vulnerable to trivial CPU exhaustion attacks using the SIG(0) protocol.\nBIND versions affected All versions with SIG(0) support. 
Tested configuration:\nBIND 9.19.19-dev (Development Release) \u0026lt;id:de2009e\u0026gt; compiled by GCC 13.2.1 20230801 compiled with OpenSSL version: OpenSSL 3.1.4 24 Oct 2023 linked to OpenSSL version: OpenSSL 3.1.4 24 Oct 2023 DNSSEC algorithms: RSASHA1 NSEC3RSASHA1 RSASHA256 RSASHA512 ECDSAP256SHA256 ECDSAP384SHA384 ED25519 ED448 DS algorithms: SHA-1 SHA-256 SHA-384 HMAC algorithms: HMAC-MD5 HMAC-SHA1 HMAC-SHA224 HMAC-SHA256 HMAC-SHA384 HMAC-SHA512 TKEY mode 2 support (Diffie-Hellman): no TKEY mode 3 support (GSS-API): yes ~\u0026ldquo;Affects v9.19\u0026rdquo;: c8c0f4bba993e267772292440456af2a483e6ad5 ~\u0026ldquo;Affects v9.18\u0026rdquo;: 44e4b5cb4acc18918c1b9772dd71c019ddf5273d ~\u0026ldquo;Affects v9.16\u0026rdquo;: 161d69aba357fa830bb6ef2b097b0447929041f0 ~\u0026ldquo;Affects v9.11 (EoL)\u0026rdquo;: v9.11.37-S1 Preconditions and assumptions Server must be configured with at least one KEY RR in a zone OR KEY RR must be in cache - with trust level secure (DNSSEC-validated record). Example configuration: https://jpmens.net/2010/12/01/securing-dynamic-dns-updates-ddns-with-sig0/ Essentially any zone with KEY RR in it should suffice. If an attacker has ability to add KEY RR he can prepare conditions for the attack. Attacker is able to find DNS name of the KEY RR. Attacker\u0026rsquo;s abilities Attacker simply needs to send a signed message which refers the existing KEY RR in one of the zones OR in cache. The signature can be invalid (any syntactically valid message is sufficient).\nImpact Inability to respond to legitimate queries, CPU resource exhaustion on the server. 
The impact gets higher with number of KEY RRs on the same name - it seems we are trying all KEY RRs until we find a match or exhaust all options.\nValues were measured on build from the bind-9.18 branch, listed as thousands QPS, named running on a single CPU core to make measurement easier.\nState QPS [k] legitimate cache hits - before the attack 70 attack 1 legitimate traffic when the attack is ongoing 10 Steps to reproduce Beware: Test over UDP. TCP also suffers from #4481 which throws results off.\nConfigure BIND to be authoritative for a zone containing KEY RR:\nnamed.conf example.zone Start BIND server with command: named -g -c named.conf -n1 (single thread to simplify measurement) Simulate legitimate clients using: yes '. A' | dnsperf -S1 -O suppress=timeout -c 256 Simulate attack traffic using attached script \u0026amp; data file: python udploop.py 127.0.0.1 53 sig0-bad-query.tcpdns --report-interval 1 udploop.py sig0-bad-query.tcpdns Please note the binary containing queries does NOT have a valid signature. It\u0026rsquo;s a valid message which pretends to be signed with the KEY RR from the zone file, but it\u0026rsquo;s not a valid signature.\nWhat is the current bug behavior? Legitimate QPS drops significantly. Impact on legit traffic QPS is roughly 60x higher than for an ordinary query without SIG(0). Side effect: log spam with one log line per attacker\u0026rsquo;s message What is the expected correct behavior? Probably limit on resources we are willing to spend on SIG(0), or expensive crypto in general.\nPossibly also smaller TCP buffer size (or smaller buffers elsewhere, I don\u0026rsquo;t exactly know where the messages are being buffered).\nRelevant logs This particular variant of attack logs one message per query:\nrequest has invalid signature: RRSIG failed to verify (BADSIG) Summary We do recognize that the new, powerful AI-enabled tools will find many software vulnerabilities that have remained hidden for years. 
We want to find these, and fix them, as soon as possible, because we know that adversaries can also use these tools. Please, if you are doing security research on open source software, consider whether you are willing to take the time to make reports that actually help the open source project you are submitting them to. If it is not worth your time to verify the hypothesis you have generated with a real test, please do not burden others with that task.\n","permalink":"/blogs/2026-04-16-How-to-report-a-vulnerability/","tags":["BIND","Security","Open Source"],"title":"How to report a security vulnerability, 101"},{"categories":null,"contents":"Most video recordings are also available on our YouTube channel. To be notified of upcoming webinars, follow ISC on Mastodon, LinkedIn, Facebook, or Bluesky, or subscribe to the training-info mailing list.\nPresentations Date Link CENTR 54 - New DELEGation - Petr Špaček April 8, 2026 Slides, Recording unavailable ICANN 85 - Generalized DNS Notifications in BIND 9 - Matthijs Mekking March 11, 2026 Slides, Recording RIPE 91 - Open-Source Release Engineering - Ondřej Surý October 22, 2025 Slides, Recording DNS-OARC 45 - DNS Best Current Practices - a DNS-OARC Community initiative - Matthijs Mekking October 8, 2025 Slides, Recording BSDCan - F-Root on BSD: Alpha to Now - Dan Mahoney June 14, 2025 Slides, Recording ICANN 83 Policy Forum - DELEG: What Does It Mean for DNSSEC? - Petr Špaček June 9, 2025 Slides, Recording ICANN 83 Policy Forum - PQC for DNSSEC: the Good, the Bad, and the Ugly - Ondřej Surý June 9, 2025 Slides, Recording CENTR Jamboree - What's New in BIND 9.20 and Beyond? 
- Colin Vidal May 23, 2025 Slides, Recording unavailable RIPE 90 - SIEVE - An Eviction Algorithm Simpler than LRU for Web Caches - Ondřej Surý May 16, 2025 Slides, Recording RIPE 90 - How to Make BIND 9 Fast(er) and More Scaleable - Ondřej Surý May 15, 2025 Slides, Recording RIPE 90 - Money and Open Source Sustainability - Vicky Risk May 15, 2025 Slides, Recording RIPE 90 - DNSSEC Post-Quantum Crypto Algorithms in BIND - Ondřej Surý May 14, 2025 Slides, Recording ICANN Policy Forum - US EO Calls for Encrypting DNS - Vicky Risk March 12, 2025 Slides, Recording (~ minute 28) DNS-OARC44 - Devil's Advocacy for DoH - Aydin Mercan February 2, 2025 Slides, Recording DNS-OARC44 - Realistic Benchmarking of DNS Resolver Cache Policies - Štěpán Balážik February 7, 2025 Slides, Recording DNS-OARC44 - Thinking about Serve Stale - Cathy Almond February 6, 2025 Slides, Recording FOSDEM 2025 - How to Make BIND 9 Fast(er) - Ondřej Surý February 2, 2025 Slides, Recording CSNOG 2025 - Měření výkonu přenosu DNS zón (Measuring the Performance of DNS Zone Transfers) - Petr Špaček January 21, 2025 Slides, Recording (in Czech) ICANN 81 - Introducing RSSAC and the DNS Root Server System - Jeff Osborn November 10, 2024 Slides, Recording OARC 43 - CVSS Scoring Often Does Not Work Well for DNS - Cathy Almond October 27, 2024 Slides, Recording OARC 43 - Zone Transfer Benchmarks - Petr Špaček October 26, 2024 Slides, ","permalink":"/presentations/","tags":null,"title":"Presentations and Webinars"},{"categories":["ISC","Open Source"],"contents":"In the past, we\u0026rsquo;ve profiled some of ISC\u0026rsquo;s software and support engineers, and even our product manager. But what about some of our non-technical staff? Let\u0026rsquo;s meet T. Marc Jones, our Director of Sales!\nT. 
Marc Jones has been in the Internet industry since the 1990s, when the commercial Internet was just starting to be a \u0026ldquo;thing.\u0026rdquo; He started out in sales at UUNET Technologies, one of the first Internet Service Providers, and also spent time at Cable and Wireless, Verizon, and Cogent before joining ISC in 2013.\nAlthough his previous work was all in the for-profit sector, T. Marc believes in the open source mission. He says, \u0026ldquo;I think it\u0026rsquo;s a really cool thing to work on open source projects that are so widely used and a critical part of a successful and functional Internet.\u0026rdquo; He especially enjoys his colleagues at ISC: \u0026ldquo;We have a unique model where all our staff are globally distributed and remote from one another. Still, we have been able to create a close, professional, and very effective work (and personal relationship) environment. I hadn’t experienced anything like this in my prior work experiences.\u0026rdquo;\nT. Marc is responsible for sales and the primary ISC revenue-generating activities. He adds, ”In this role, I hope to continue growing the number of ISC customers and contributing to the ISC cause.\u0026quot;\nT. Marc is yet another multidisciplinary ISC staff member: he holds a Bachelor of Economics with a Minor in Philosophy from Lehigh University, where he was also a talented team athlete. He says, \u0026ldquo;I bring that sense of teamwork and fellowship to ISC. I see our customers, business partners, and user community as team members too, and I want everyone to feel good about their association with ISC.\u0026rdquo;\nHis smooth personality and melodious voice are natural assets in sales, but T. Marc is also a professional High Performance Driving Coach! He says, \u0026ldquo;I teach aspiring racing drivers the fundamentals of high performance racetrack driving. 
I am certified with the Motorsport Safety Foundation and enjoy national credentials to coach anywhere in the United States!\u0026rdquo;\nWhen he\u0026rsquo;s not zooming around the track, you might find T. Marc riding his bike around the San Francisco Bay area. Wherever he is, he\u0026rsquo;s a huge asset to ISC and we are fortunate to have him on our team.\nAs always, we thank you for reading this profile; our goal is to help our customers and users get to know ISC a little bit better. We welcome your feedback at marketing@isc.org, and thank you for using ISC’s software!\n","permalink":"/blogs/isc-profile-jones/","tags":["Open Source","Profiles"],"title":"Meet an ISC Staff Member – T. Marc Jones!"},{"categories":null,"contents":"Rick Adams, Chair Richard L. Adams, Jr. is the founder of UUNET Technologies, the first commercial Internet Service Provider (ISP) and one of the largest Internet traffic carriers in the world in the 1990s. Among other things, Adams’ accomplishments at the helm of UUNET include the invention of Serial Line Internet Protocol (SLIP), technology that allows personal computers to connect to the Internet via modems. In the early 1980s, 3Com’s UNET Unix system could exchange TCP/IP traffic over serial lines; in 1984 Adams implemented this system on Berkeley Unix 4.2 and dubbed it SLIP. The SLIP protocol was documented in RFC 1055. The SLIP protocol was superseded in the early 1990s by the Point-to-Point Protocol (PPP), which is still in use.\nRick Adams founded a non-profit telecommunications company, UUNET Communications Service, to reduce the cost of mail and Usenet traffic sent by UUCP. (UUNET was founded with a $50,000 loan from the USENIX Association, which was subsequently repaid.) UUNET became an official gateway between UUCP mail and Internet email, as well as between North America and Europe. 
It hosted many related services, such as Internet FTP access for its UUCP clients and the comp.sources.unix archives.\nAdams spun out a for-profit company, UUNET Technologies, which was the first ISP in the United States. The for-profit company bought the assets of the non-profit, repaying it with a share of the profits over the years. The non-profit has spent that money for many UNIX-related charitable causes over the years, such as supporting the Internet Software Consortium. The for-profit ISP became a multi-billion-dollar company and made an initial public offering in 1995. It was acquired by MFS (Metropolitan Fiber Systems, a wide-area optical-networking company) in 1996; MFS was subsequently acquired by WorldCom, which rose to challenge the largest telecommunications companies in America.\nAdams co-authored the O’Reilly book !%@:: A Directory of Electronic Mail Addressing \u0026amp; Networks with his wife Donnalyn Frey. He is also co-author of RFC 1036, the Standard for Interchange of USENET Messages.\nHe obtained a master’s degree in computer science from Purdue University.\nFred Baker, Former Board Member - In memoriam Fred Baker passed away suddenly on June 18, 2025, while serving as an active member of ISC\u0026rsquo;s BOD. Read our blog post here.\nFred Baker was involved in the data communications industry since 1978, working for Control Data Corporation, Vitalink Communications, Advanced Computer Communications, and Cisco Systems. His stint at Cisco was from 1994 until 2016. After that, he worked as a contractor, notably for the Internet Society and ISC. Starting in 2017, he represented ISC in the Root Server System Advisory Committee of ICANN, and as of 2018 was a co-chair of that committee.\nSince 1989, Baker was involved with the Internet Engineering Task Force (IETF), the body that develops standards for the Internet. 
He chaired a number of IETF working groups, including several that specified the management information bases (MIB) used to manage network bridges and popular telecommunications links. Baker served as IETF chair from 1996 to 2001, when he was succeeded by Harald Tveit Alvestrand, and he served on the Internet Architecture Board from 1996 through 2002. He co-authored or edited at least 60 Request for Comments (RFC) documents on Internet protocols, and contributed to others. The subjects covered include network management, Open Shortest Path First (OSPF) and Routing Information Protocol (RIPv2) routing, quality of service (using both the Integrated Services and Differentiated Services models), Lawful Interception, precedence-based services on the Internet, and others.\nIn addition, he served as a member of the Board of Trustees of the Internet Society from 2002 through 2008, and as its chair from 2002 through 2006. He was a member of the Technical Advisory Council of the US Federal Communications Commission in 2004. He worked as liaison to other standards organizations such as the ITU-T. In 2009-2010, he served as chair of the RFC Series Oversight Committee.\nStarting in 2005, Baker co-chaired the IPv6 Operations Working Group in the IETF. He represented IETF on the National Institute of Standards and Technology Smart Grid Interoperability Panel and Architecture Committee from 2008-2013, and was Cisco\u0026rsquo;s representative to a Broadband Internet Technical Advisory Group. Baker also held several patents.\nBaker attended the New Mexico Institute of Mining and Technology from 1970 to 1973.\nDavid J. Farber, Former Board Member - In memoriam Dave Farber passed away on February 7, 2026 at his home in Roppongi, Tokyo. Read our blog post here.\nDavid J. \u0026ldquo;Dave\u0026rdquo; Farber was a professor of computer science, noted for his major contributions to programming languages and computer networking. 
He was a Distinguished Professor at Keio University in Tokyo, Japan and Co-Director of the Cyber Civilization Research Center at Keio.\nDr. Farber graduated from the Stevens Institute of Technology in 1956 and began an 11-year career at Bell Laboratories, where he helped design the first electronic switching system (ESS-1) and the SNOBOL programming languages. He subsequently held industry positions at the Rand Corporation and Scientific Data Systems, followed by academic positions at the University of California, Irvine, and the University of Delaware.\nAt Irvine his research work was focused on creating the world\u0026rsquo;s first operational distributed computer system. While a member of the Electrical Engineering Department of the University of Delaware, he helped conceive and organize the major American research networks CSNET, NSFNet, and the National Research and Education Network (NREN). He helped create the NSF/DARPA-funded Gigabit Network Testbed Initiative and served as the Chairman of the Gigabit Testbed Coordinating Committee.\nDr. Farber subsequently was appointed Alfred Fitler Moore Professor of Telecommunication Systems at the University of Pennsylvania, where he also held appointments as Professor of Business and Public Policy at the Wharton School of Business and as a Faculty Associate of the Annenberg School for Communication. He served as Chief Technologist at the US Federal Communications Commission (2000–2001) while on leave from the university.\nHe also was a Distinguished Career Professor of Computer Science and Public Policy at the School of Computer Science, Heinz College, and Department of Engineering and Public Policy at Carnegie Mellon University.\nHe was a Fellow of the IEEE, ACM, and the AAAS.\nStephen Wolff, Board Member Stephen Wolff is one of the many fathers of the Internet. 
He is mainly credited with turning the Internet from a government project into something that proved to have both scholarly and commercial interest for the rest of the world. Dr. Wolff realized the potential of the Internet before most, and began selling the idea that the Internet could have a profound effect on the commercial world.\nWolff taught Electrical Engineering at The Johns Hopkins University, and subsequently spent fourteen years managing a computing and communications research group at the US Army Ballistic Research Laboratory, which (among other things) ported Berkeley UNIX to a variety of computers of the day including the Denelcor HEP, an early supercomputer; the group also participated in the development and deployment of the ARPANET.\nIn 1986, Wolff was appointed Division Director for Networking and Communications Research and Infrastructure at the National Science Foundation, where he managed the NSFNET project. That included a national backbone network in the US that interconnected NSF-sponsored supercomputing centers, regional research and education networks, federal agency networks, and international research and education networks. The regional networks connected to the backbone and in turn interconnected the nation\u0026rsquo;s colleges, universities, and other public institutions and so provided universal connectivity to the academic community. The NSFNET connected to the ARPANET by adopting the TCP/IP protocols, which became the de facto standard of the nascent Internet.\nIn 1994, Wolff left NSF and joined Cisco, where he continued to support academic networking research and infrastructure. 
From 2011 until his retirement in 2017 he served as Interim Vice President and Chief Technology Officer, and later Chief Scientist at Internet2, a not-for-profit technology company that provides a backbone network and related services for US higher education.\n","permalink":"/board/","tags":null,"title":"ISC's Board of Directors"},{"categories":["BIND","Open Source"],"contents":"A new configuration option, notify-cfg CDS, was added to BIND 9 in version 9.21.17. This enabled Generalized DNS Notifications for CDS and/or CDNSKEY RRset changes, as specified in RFC 9859. ISC Gitlab reference #5611\nExcerpt from IETF RFC 9859:\nThe original DNS notifications [RFC1996], which are here referred to as \u0026ldquo;NOTIFY(SOA)\u0026rdquo;, are sent from a primary server to a secondary server to minimize the latter\u0026rsquo;s convergence time to a new version of the zone. This mechanism successfully addresses a significant inefficiency in the original protocol.\nToday, similar inefficiencies occur in new use cases, in particular delegation maintenance (DS and NS record updates). Just as in the NOTIFY(SOA) case, a new set of notification types will have a major positive benefit by allowing the DNS infrastructure to completely sidestep these inefficiencies. \u0026hellip;\n\u0026hellip; the notification will speed up processing times by providing the recipient with a hint that a particular child zone has published new CDS, CDNSKEY, and/or CSYNC records.\nMatthijs Mekking gave this talk at the DNSSEC Workshop at ICANN 85 introducing the feature: SLIDES, RECORDING.\n","permalink":"/blogs/2026-generalized-notify/","tags":["open source","BIND"],"title":"Generalized DNS Notifications in BIND 9"},{"categories":["BIND","Performance"],"contents":"In this post, we will show that BIND 9.20 compares favorably with 9.18, providing faster responses to clients, both with a cold cache and after the cache is filled. 
In low-traffic scenarios, BIND 9.20 can consume more memory than 9.18, but as traffic increases, BIND 9.20 uses less memory than BIND 9.18.\nWe run performance tests continually as part of our development process, but the detailed results are more relevant to the development team than to users. Periodically we publish performance benchmarks so that users considering an upgrade can assess the likely performance change in their own deployment.\nThis article focuses on benchmarking resolver performance, using a methodology that aims to provide near-real-world performance results for resolvers. Our methodology has not changed significantly since our 2021 blog post, but for ease of reading we have repeated the description of the test methodology. If you are already familiar with our test methodology, scroll down until you see the charts for the results.\nAbout Resolver Testing Cache State and Timing Matter Resolvers don\u0026rsquo;t know any DNS answers by themselves; they have to contact authoritative servers to obtain individual bits of information and then use them to assemble the final answer. Resolvers are built around the concept of DNS caching. The cache stores DNS records previously retrieved from authoritative servers. Individual records are stored in a cache up to the time limit specified by the authoritative server (Time To Live, or TTL). Caching greatly improves scalability.\nAny DNS query which can be fully answered from cache (a so-called \u0026ldquo;cache hit\u0026rdquo;) is answered blazingly fast from the DNS resolver\u0026rsquo;s memory. On the other hand, any DNS query which requires a round-trip to authoritative servers (a \u0026ldquo;cache miss\u0026rdquo;) is bound to be orders of magnitude slower. 
Moreover, cache miss queries consume more resources because the resolver has to keep the intermediate query state in its memory until all information arrives.\nThis very principle of the DNS resolver has significant implications for benchmarking: in theoretical terms, each DNS query potentially changes the state of the DNS resolver cache, depending on its timing. In other words, queries are not independent of each other. Any change to how (and when) we query the resolver can impact measurement results.\nIn more practical terms, this implies a list of variables that we have to replicate:\nA stream of test queries resulting in a realistic cache hit/miss rate. For this purpose, we have to replicate the exact queries and also their timing. Answers returned by authoritative servers, including TTL values. Network conditions between the resolver and authoritative servers (latency, packet loss, etc.). Cache size and other parameters affecting cache hit/miss ratio. The traditional approach implemented, e.g., in ISC\u0026rsquo;s Perflab or using the venerable resperf tool, cannot provide realistic results because it ignores most of these variables.\nThe second implication is that even the traditional QPS metric (queries answered per second) alone is too limited when evaluating resolver performance: it does not express the type of queries, answer sizes and TTLs, query timing, etc.\nYou Can\u0026rsquo;t Simulate the Internet The long list of variables above makes it clear that preparing an isolated laboratory with a realistic test setup is very hard. In fact, ISC and other DNS vendors have learned that it\u0026rsquo;s impossible; realistic resolver benchmarking must be done on the live Internet.\nDevelopers from CZ.NIC Labs wrote a test tool called DNS Shotgun for this purpose. It replays DNS queries from traffic captures and simulates individual DNS clients, including their original query timing. 
The resolver under test then processes queries as usual, i.e., contacts authoritative servers on the Internet and sends answers back to the simulated clients. DNS Shotgun then receives and analyzes the answers.\nObviously, benchmarking on a live network cannot provide us with perfectly stable results. To counter that, we repeat each test several times and always take fresh measurements instead of using historical data. This process ensures that changes on the Internet and our test system do not skew our comparison.\nFor each test run, we start with a new resolver instance with an empty cache. This way, we simulate the worst case of regular operation: it is as if the resolver was restarted and now has to rebuild its cache from ground zero.\nLet\u0026rsquo;s have a look at the variables we measure and how to interpret them.\nInterpreting Resolver Behavior The QPS metric alone is not particularly meaningful in the context of regular DNS resolver operation. Instead, we measure indications that resolver clients are getting timely answers, and resource consumption on the server.\na) CPU Utilization\nWe monitor the time BIND processes spend using the CPU, as reported by the Linux kernel Control Group version 2 metric usage_usec, and then normalize the value in a way which gives 100 % utilization = 1 fully utilized CPU. Our test machine has 16 cores, so its theoretical maximum is 1600 %. CPU usage is a cumulative metric and we plot a new data point every 0.1 seconds.\nb) Memory Usage\nWe use the Linux kernel Control Group version 2 metric memory.current to monitor BIND 9\u0026rsquo;s memory consumption. It is documented as \u0026ldquo;the total amount of memory currently being used\u0026rdquo; and thus includes memory used by the kernel itself to support the named process, as well as network buffers used by BIND. 
Resolution of the resource monitoring data is 0.1 seconds, but the memory consumption metric is a point-in-time value, so hypothetical memory usage spikes shorter than 0.1 seconds would not show on our plots.\nc) Response Latency - How quickly does the resolver respond?\nFinally, we arrive at the most useful but also the most convoluted metric: response latency, which directly affects user experience. Unfortunately, DNS latency is wildly non-linear: most answers will arrive within a split-millisecond range for all cache hits. Latency increases to a range of tens to hundreds of milliseconds for normal cache misses and reaches its maximum, in the range of seconds, for cache misses which force communication with very slow or broken authoritative servers.\nThis inherent nonlinearity also implies that the simplest tools from descriptive statistics do not provide informative results.\nTo deal with this complexity, the fine people from PowerDNS developed a logarithmic percentile histogram which visualizes response latency. It allows us to see things such as:\n95 % of queries were answered within 1 ms (cache hits) 99 % of queries were answered within 100 ms (typical cache misses) 99.5 % of queries were answered within 1000 ms (problematic cache misses) and so on.\nEven more importantly, a logarithmic percentile histogram allows us to compare the latency of various resolver setups visually.\nTest Methodology Data Set and Load Factor For realistic results, we need a realistic query data set. This article presents results measured using traffic captures (of course anonymized!) provided by one European telecommunications operator. We would really love any samples other operators could provide, as diversity in our sample data would make our testing more representative.\nThese traffic captures contain one hour of traffic directed to 10 independent DNS resolvers, all of them with roughly the same influx of queries. 
In practice, we have 10 PCAP files: the first with queries originally directed to resolver #1, the second with queries directed to resolver #2, etc.\nThese traffic captures define the basic \u0026ldquo;load unit\u0026rdquo; we use throughout this article: traffic directed to one server = load factor 1x. To simulate higher load on the resolver, we simultaneously replay traffic originally directed to N resolvers to our single resolver instance under test, thus increasing load N times. E.g., if we are testing a resolver under load factor 3x, we simultaneously replay traffic originally directed to resolvers #1, #2, and #3.\nThis definition of load factor allows us to avoid theoretical metrics like QPS and simulate realistic scenarios. For example, it allows us to test this scenario: \u0026ldquo;What performance will we get if nine out of 10 resolvers have an outage and the last resolver has to handle all the traffic?\u0026rdquo;1\nTest Design Here is the basic testbed setup we used to compare the most recent release of BIND 9.20 to the most recent BIND 9.18 version. We are intentionally not providing the exact hardware specifications, to discourage undue generalization of the results.\nWe use two servers: one to simulate (many) DNS clients using DNS Shotgun, and the other to run the DNS resolver under test. Each machine has 16 logical CPUs (eight physical cores with hyperthreading enabled) and 42 GB of RAM. DNS Shotgun is configured to replay the original query stream (including timing) from one or more (original) telco resolvers to one machine running a resolver under test, with a 2000 ms timeout on the client side.2 BIND is configured with max-cache-size set to 30 gigabytes. Practically, all other values are left at default settings: the resolver is doing full recursion and DNSSEC validation. Also, the resolver has both IPv4 and IPv6 connectivity. 
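As a rough sketch, the relevant part of that resolver configuration would look something like the named.conf fragment below. This is not the exact test configuration: only the cache limit is taken from the description above, and the other lines simply restate BIND 9's defaults for clarity.

```
options {
    // Cap the cache at 30 GB, as in the test setup described above.
    max-cache-size 30G;

    // Defaults restated for clarity: full recursion with DNSSEC validation.
    recursion yes;
    dnssec-validation auto;
};
```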
The resolver and client machine always start from a completely clean state; most importantly, the resolver always starts with an empty cache. This approach allows us to measure the worst-case scenario, \u0026ldquo;how quickly will the resolver recover after a restart?\u0026rdquo; In practice, we inspect resolver behavior in the first 600 seconds, and expect service recovery within the first 60 seconds. Of course, 600 seconds is a short test compared to regular resolver uptime. We are focusing on the worst-case scenario, which is an empty cache. Depending on client behavior, the resolver can handle even more load after it has had more time to fill its cache. By starting in a clean state, we ensure that the performance levels described in this article can be safely reached without worrying about system restarts, complicated load balancing, etc. There is one point I cannot stress enough:\nIndividual test results like response rate, answer latency, maximum QPS, etc., are generally valid only for the specific combination of all test parameters, the input data set, and the specific point in time.\nIn other words, results obtained using this method are helpful ONLY for relative comparison between versions, configurations, etc., measured on the exact same setup with precisely the same data and time.\nFor example, a test indicates that a residential ISP setup with a resolver on a 16-core machine can handle 160k QPS. It\u0026rsquo;s not correct to generalize this to another scenario and say, \u0026ldquo;a resolver on the same machine will handle a population of IoT devices with 160k QPS on average\u0026rdquo; because it very much depends on the behavior of the clients. If all of our hypothetical IoT devices query every second for api.vendor.example.com AAAA, the resolver will surely handle the traffic because all queries cause a cache hit. 
On the other hand, if each device queries for a unique name every second, all queries will cause a cache miss and the throughput will be much lower. Even historical results for the very same setup are not necessarily comparable because \u0026ldquo;something\u0026rdquo; might have changed on the Internet.\nPlease allow me to repeat myself:\nThis test was designed to compare BIND 9.18 to BIND 9.20, handling a specific set of client queries at a specific point in time. Depending on the test parameters and your client population, your results could be completely different, which is why we recommend you test yourself if you can.\nResults We have eight sets of charts, all using the same query data set. We ran the test in a long (600-second) duration, with 1x, 5x, and 20x load factors. Then we ran the same test in a short (120-second) duration, using load factors 1x, 5x, 10x, 15x, and 20x. The longer-duration tests are more useful for checking memory consumption (as the cache has time to fully populate) and CPU utilization over time, but the shorter tests are fine for measuring response latency.\nCharts shown in this section show aggregate results from three repetitions of the test on each version. The solid line shows the average of the three tests, while the colored background shows the range between minimum and maximum for each version. That is, the wider the colored background is, the more unpredictable the characteristic is, and vice versa.\nCPU utilization Let\u0026rsquo;s have a look at CPU load during the longer ten-minute test. We have charts for the base, 5x, and 20x load factors. Remember, our test machine has 16 cores, so its theoretical maximum is 1600 %.\nBase load factor This chart shows CPU load, test times 0 - 600 seconds, with the base load factor. Utilization is around 60% and slightly lower for BIND 9.20 vs 9.18.\n5x load factor This chart shows the same test, with 5x the base load. 
CPU utilization is around 200%, nearly the same for both versions.\n20x load factor - note the colors are reversed, 9.20 is orange here In the 20x load factor scenario, on start-up, BIND 9.20 makes better use of the CPU, spiking much higher than 9.18. CPU utilization stabilizes at 650 - 800% for the duration of the test, slightly higher for the more recent BIND version. (We ran this test after the first batch of tests, which is why the chart is formatted differently.)\nMemory usage Similarly to CPU usage, we use the longer 10-minute test run to look at memory usage.\nBase load factor In our base load factor scenario, BIND 9.20 uses more memory than 9.18. Note that memory usage is expected to grow over time as the cache fills up with records.\n5x load factor In the 5x load factor scenario, BIND 9.20 initially uses less memory than 9.18, while the cache is cold. By the end of the 10-minute test, with the cache fully warm, 9.20 is using more memory, roughly 800 MB vs approximately 750 MB for 9.18. The memory differential with BIND 9.20 is much lower on a busy resolver than on the less-busy base load factor.\n20x load factor - 9.20 is orange here In the 20x load factor scenario, BIND 9.20 again uses quite a bit less memory than 9.18 at startup, when the cache is empty. By the end of the test, both BIND versions are using the same amount of memory (within our test variance). Therefore, a heavily loaded resolver should expect to see no difference in memory usage when moving from 9.18 to 9.20, while running in a steady state. 
In the lower right corner you see the large percentage of queries that are answered in two milliseconds or less; these queries are presumably answered from the local cache. Our test is not designed to differentiate at the level of a millisecond or two, so the shape of the curve below two milliseconds is immaterial. As the line rises to the left, you see the percentage of queries that have to wait for resolution via the Internet. Flat lines on the top at the 2000 ms mark show client timeouts.\nCold cache These charts show how BIND responds during the first 60 seconds after startup. The performance of BIND 9.20 is very similar to 9.18 in the base load factor scenario; only about 8% of queries remain to be answered after the first two milliseconds. This means this resolver is attaining a cache hit ratio of approximately 92 % within the first minute of operation.\nAt five times the base load factor, we see the best cache hit rate in our first minute of tests. Cache hit rate is dependent on the actual queries made, but since all of these tests use the same query set, you can see the impact of different query rates. Cache hit rates will increase across the board during the second minute, shown in the section below.\nAs we increase the load factor, the BIND 9.20 line moves further to the left of the BIND 9.18 line. When we test with 10x the base load factor, you can see that BIND 9.20 is answering about 3% more queries from cache than BIND 9.18, which means a smaller percentage of queries are resolved through queries from BIND to the Internet. Fewer than half as many queries time out in the upper left quadrant with BIND 9.20, versus 9.18.\nBy the time we increase the load to 20x our base, the BIND 9.20 server answers about 10% more queries within the first two milliseconds than 9.18. The overall cache hit rate is significantly worse with this heavy load. 
In the top left quadrant of the chart, you see that 9.20 suffers only a quarter or fewer of the timeouts seen with 9.18. This chart shows that BIND 9.20 handles the initial startup of a heavily loaded resolver much better than 9.18.\nWarm cache After the first minute, the cache is already populated with records and becomes \u0026ldquo;hot.\u0026rdquo; The following charts show response latency for the second minute of this two-minute test.\nIn the base load factor test, you get about a 1% improvement in answers from cache with BIND 9.20, but overall, response rates are very similar. The blue lines representing 9.20 are fairly consistently to the left of the orange lines, meaning the more recent version has lower latency.\nAs we increase the load to 5x, we see a further increase in the cache hit rate to about 97%. This demonstrates the benefit of running an active, busy resolver in reducing response time for users.\nCache hit rate peaks at 98% in the 10x base load scenario.\nAs the load increases, the number of queries that time out in BIND 9.20 eventually drops (seen in the upper left corner of the chart). These are small differences at the end of the long tail of responses. As expected, this second measurement at our highest traffic rate has a much higher cache hit rate, about 97%, than the measurement during the first minute of the test, which was 70-80%.\nOur conclusion is that BIND 9.20, which underwent significant refactoring, including adoption of the qp-trie in place of the red-black tree database, did not incur any resolver performance penalty, and actually provides lower-latency responses to clients than 9.18 did, particularly during the cold cache phase of operation.\nReferences Release notes for BIND 9.20.0. BIND Administrative Reference Manual for 9.20.0. We have updated the BIND Significant Features Matrix. ISC\u0026rsquo;s Software Support Policy and Version Numbering explains the release cycle. 
To simulate higher load factors, we slice and replay the traffic using the method described in this video presentation about DNS Shotgun around time 7:20. Most importantly, this method retains the original query timing and realistically simulates N-times more load. This method works under the assumption that the additional traffic we simulate behaves the same way as the traffic we already have. I.e., if you have 100,000 clients already, the assumption is that the next 100,000 will behave similarly. This assumption allows us to reuse slices of the original traffic capture from 10 resolvers to simulate the load on 20 resolvers.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe DNS Shotgun timeout of 2 s was selected to reflect a typical timeout on the client side. BIND uses an internal timeout of 10 s to resolve queries; the resolver continues resolving the query even after the client has given up. This extra time allows the resolver to find answers even with very broken authoritative setups and cache them. These answers are then available when the clients ask again.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/blogs/2026-03-05-bind-resolver-performance/","tags":["BIND","Performance"],"title":"BIND 9.20 Resolver performance compared to 9.18"},{"categories":["Kea","Stork","Compliance","Security"],"contents":"In mid-2025 ISC contracted with OSTIF to identify an external organization to audit our Kea and Stork code for security issues. Kea is a modern DHCP server, and Stork is a graphical management tool for DHCP and DNS, and both projects are open source under the MPL 2.0 license.\nWorking with OSTIF, we identified four key goals for the Kea audit Kea is a fairly stable and mature codebase, without a lot of external dependencies. Kea has an extensive API and is often deployed with an SQL database for external data storage. 
When Kea was first developed, network management applications could be assumed to be deployed on a \u0026ldquo;bastion host,\u0026rdquo; in a protected network segment. However, since then the focus for network security has shifted to a defense-in-depth approach. We wanted to:\nDevelop a threat model for Kea to identify the likely attack points. Audit the source code, specifically including the extensive API and the connections with external systems, such as databases, which we felt were logical attack surfaces. We hadn\u0026rsquo;t had the time to set up fuzz testing for Kea, so OSTIF suggested they could focus on finding an auditor with specific expertise in that area to help establish a fuzz-testing program. We also had not yet generated Software Bills of Materials (SBOMs) for Kea, and we also hoped to get some help with that. OSTIF recommended Ada Logics for this audit. In addition to their specific security expertise, this team has extensive experience with fuzzing open source projects, and familiarity with generating SBOMs. After a meeting to introduce ourselves, clarify our objectives, and organize our communications, the auditors worked largely independently of the Kea team.\nKea audit results The review included scanning the source code with several analyzers that Kea was not using at the time, as well as some manual inspection. The auditors also delivered a working fuzz-testing project, set up with Google\u0026rsquo;s OSS-Fuzz system. With the analysis and fuzzing, the inspectors found a total of six potential issues. These were all assessed to be of \u0026ldquo;informational\u0026rdquo; severity: a couple are opportunities for enhancements to Kea. 
An example is a suggestion that we should enable multi-factor authentication for Kea administrators, which was already under consideration.\nAt the conclusion of the audit the Ada Logics team transferred ownership of the fuzzing project to the Kea team, so we will receive any test findings and will be able to modify and extend the tests going forward.\nThe full audit report from Ada Logics is available on the OSTIF website.\nWe had slightly different goals for the Stork audit Stork is a graphical management utility with a web interface. It is written in Go, using Angular and PrimeNG as significant user interface dependencies. We had recently completed a major upgrade of those components, and asked the auditors to specifically study our exposure to possible vulnerabilities in these components. Stork provides privileged access to both Kea and any monitored DNS servers, so we requested a special focus on our handling of these secrets and enforcement of access controls. We think most of our Stork users may be relying on our packages, and because of our significant dependencies on external projects, we wanted to explore enhancing our supply-chain tracking and security.\nOSTIF recommended 7ASecurity for this study. Their approach is more white-box pen-testing than manual code inspection, which we thought was appropriate for a web-based product.\n7ASecurity proposed seven \u0026ldquo;work packages\u0026rdquo; for this project:\nWhitebox Tests Against Stork Web Frontend and API Whitebox Tests Against Stork Backend, Agents, and Auth Logic Whitebox Tests Against Stork Supply Chain Analysis Implementation Lightweight Threat Model for Stork and Kea Integration Points Stork Dependency Management Review \u0026amp; SBOM Creation Stork Config Parser Fuzzing and Test Case Creation Stork Audit Results The 7ASecurity team identified seven potential vulnerabilities, two of which they scored as high severity. 
None of these met ISC\u0026rsquo;s requirements for a CVE, but we prioritized fixing all of them. ISC has implemented fixes for all seven vulnerabilities and 7ASecurity has verified the fixes. These fixes were included in the Stork 2.4.0 release, which was posted on February 25, 2026. The 7ASecurity team agreed to delay publication of their report until after the release.\n7A made five recommendations for hardening Stork. ISC has developed fixes that address four of these; the fifth, a recommendation that we implement multi-factor authentication for Stork users, was already on our roadmap, and is being tracked as Stork issue number 2210.\nThe auditors assessed Stork\u0026rsquo;s compliance with the SLSA Framework, versions 0.1 and 1.0. We do not yet have any clear requirements for supply chain security from our users, but we anticipate that this may become a compliance requirement soon. The SLSA assessment was complicated by the fact that ISC\u0026rsquo;s testing and package building infrastructure is not exposed on the public Internet, and we had given 7ASecurity access only to our public Stork development repository. 7A judged that Stork met the SLSA 0.1 requirements, and met the SLSA 1.0 framework requirements for the Level 1 criteria. We have opened a few issues to address some outstanding changes to help us achieve Level 2, which addresses the build service as well as automatically generating, storing, and publishing build artifacts alongside our packages.\nWe encourage users to read the full audit report from 7ASecurity. We have opened GitLab issues to track all of the 7ASecurity recommendations, which can be found in our Stork repository by doing a search for issues with the label \u0026ldquo;sec-audit2.\u0026rdquo;\nAcknowledgements These audits were part of a project funded through the ICANN Grant Program. ICANN is a nonprofit public benefit corporation established in 1998. 
Its mission is to ensure a stable, secure, and unified global Internet by coordinating the allocation and management of Internet Protocol (IP) addresses, domain names, and protocol parameters.\n","permalink":"/blogs/2026-two-code-audits/","tags":["Kea","Stork","compliance","security"],"title":"Kea and Stork Projects Audited"},{"categories":["BIND"],"contents":"During the first week of February (2026), I ran several experiments using Claude Code. These included 1) asking Claude to analyze and recommend changes to BIND 9, 2) creating several new prototype projects, 3) advising me on home construction, and 4) helping me prepare materials for a university class I am teaching.\nThe BIND 9 Experiment [nb: We have recently established Guidelines for Tool-Generated Content for the BIND project.]\nI imported the BIND 9 source code and let Claude Code analyze it. Then I asked it to find problems in the source code with a focus on security issues, and also to modernize the code, and then to fix these problems one by one using git worktrees.\nI did not make any changes in BIND based on Claude\u0026rsquo;s recommendations. My evaluation, at the end of this experiment, was that Claude generated technically correct but practically useless code: when analyzing the BIND 9 source code, Claude found \u0026ldquo;issues\u0026rdquo; like reserved identifiers and potential integer overflows that compilers already handle. 
It was technically sound, but completely useless and a waste of time in a mature, real-world context.\nFirst, I asked Claude to analyze the libisc component of BIND 9.\nThe following branches were created in separate worktrees:\n~/Projects/bind9-fix-endian 0a70abfeeb [fix/endian-inline-functions] ~/Projects/bind9-fix-error 4638ad205b [fix/error-atomic-callbacks] ~/Projects/bind9-fix-magic ed2ea6d675 [fix/magic-shift-ub] ~/Projects/bind9-fix-pause 8f7f43b098 [fix/pause-reserved-id] ~/Projects/bind9-fix-proxy2-comment 469ec1580b [fix/proxy2-comment] ~/Projects/bind9-fix-proxy2-range fa14f684fd [fix/proxy2-range-check] ~/Projects/bind9-fix-readline b3b55efd20 [fix/readline-malloc-check] ~/Projects/bind9-fix-serial d9e3925e5e [fix/serial-symmetric-checks] Let\u0026rsquo;s go through all of them:\nendian-inline-functions In this branch Claude Code suggested modernizing the code to use inline functions instead of macros. The branch contains a lot of changes like this:\n-#define ISC_U16TO8_LE(p, v) \\ - (p)[0] = (uint8_t)((v)); \\ - (p)[1] = (uint8_t)((v) \u0026gt;\u0026gt; 8); +static inline void +ISC_U16TO8_LE(uint8_t *p, uint16_t v) { + p[0] = (uint8_t)(v); + p[1] = (uint8_t)(v \u0026gt;\u0026gt; 8); +} Generally speaking, inline functions have the advantage over macros that they produce better backtraces, but personally, I would not use inline functions for endianness macros like this. 
Most probably there is no performance impact, but these functions are used in low-level cryptography routines and meddling with them would require extra measurements that would consume time that\u0026rsquo;s better spent elsewhere.\nClaude Code produced the following commit message:\nConvert multi-statement endian macros to inline functions The ISC_U*TO8_* write macros were unsafe multi-statement macros that could break in conditional contexts: #define ISC_U16TO8_LE(p, v) \\ (p)[0] = (uint8_t)((v)); \\ (p)[1] = (uint8_t)((v) \u0026gt;\u0026gt; 8); If used in: if (x) ISC_U16TO8_LE(p, v); Only the first statement would be conditional. Traditional fix: do { ... } while(0) wrappers Modern C17/C23 fix: Use static inline functions for type safety and proper statement semantics. Converted all 8 write macros to inline functions: - ISC_U16TO8_LE/BE (uint8_t *p, uint16_t v) - ISC_U32TO8_LE/BE (uint8_t *p, uint32_t v) - ISC_U48TO8_LE/BE (uint8_t *p, uint64_t v) - ISC_U64TO8_LE/BE (uint8_t *p, uint64_t v) Read macros (ISC_U8TO*) remain as expression macros since they return values by expression, not side effects. Benefits over do-while wrappers: - Type safety: compiler checks pointer and value types - Better debugging: functions show in stack traces - No semicolon footgun with do-while - Modern C idiom (inline functions are standard since C99) As a side note, there was no do-while in the entire header, so I have no idea what it is talking about. 🤷\nerror-atomic-callbacks In this commit, Claude Code changed our assertion callbacks from function pointers to atomic function pointers. While technically correct, the setting of the \u0026ldquo;fatal\u0026rdquo; callbacks is always done before the threading starts and then we never change them. While not enforced, the functions isc_error_setunexpected and isc_error_setfatal are meant to be called when setting up the program, not in the middle of execution. 
Here Claude Code went with formalism instead of understanding how the functions are being used.\nHere\u0026rsquo;s the associated commit message:\nFix data race in error.c callback setters using C11 atomics The error callback pointers were stored in static globals and accessed without synchronization, creating a data race if callbacks were changed while errors were being reported from other threads. Before (data race): static isc_errorcallback_t unexpected_callback = ...; void isc_error_setunexpected(cb) { unexpected_callback = cb; // unsynchronized write } void isc_error_unexpected(...) { unexpected_callback(...); // unsynchronized read } After (thread-safe): static _Atomic(isc_errorcallback_t) unexpected_callback = ...; void isc_error_setunexpected(cb) { atomic_store(\u0026amp;unexpected_callback, cb); } void isc_error_unexpected(...) { cb = atomic_load(\u0026amp;unexpected_callback); cb(...); } Using C11 atomics (_Atomic type qualifier with atomic_store/atomic_load) ensures proper memory ordering and eliminates the race condition. Note: Direct initialization without ATOMIC_VAR_INIT() is used, as that macro was deprecated in C17 and removed in C23. Modern C allows normal initialization of atomic objects. As a side note, I believe I had to push it to not use ATOMIC_VAR_INIT(). Also there is really no data race.\nfix/magic-shift-ub Here, Claude Code caught a potential signed integer overflow \u0026ndash; except that it also correctly notices there is no such thing in the whole code base and the compiler is going to yell at us if this ever happens.\nAgain, here\u0026rsquo;s the associated commit message:\nFix signed integer overflow UB in ISC_MAGIC macro The ISC_MAGIC macro constructed magic numbers by left-shifting potentially signed integer arguments: #define ISC_MAGIC(a, b, c, d) ((a) \u0026lt;\u0026lt; 24 | (b) \u0026lt;\u0026lt; 16 | ...) 
If any byte value \u0026gt;= 0x80 (e.g., 0x80 \u0026lt;\u0026lt; 24 = 0x80000000), this caused signed integer overflow, which is undefined behavior in C. Fixed by casting to uint8_t (to ensure byte values) and then to uint32_t before shifting, ensuring all operations use unsigned arithmetic: #define ISC_MAGIC(a, b, c, d) \\ (((uint32_t)(uint8_t)(a) \u0026lt;\u0026lt; 24) | ...) All current uses pass ASCII characters (\u0026lt; 0x80), so behavior is unchanged. This fix prevents UB if the macro is ever used with high-bit bytes. Again, the change is technically very sound and is correct, but it is also useless and a waste of time.\nfix/pause-reserved-id The pattern of being correct and useless at the same time can be seen in this proposed fix. Again, Claude Code is not wrong that double underscores are reserved by the C standard; however, the use of the __pause variable is not causing any problems right now, and at the same time there are more uses of this pattern in isc/uv.h, isc/util.h, and many other headers.\nProposed commit message follows:\nFix reserved identifier in isc_pause_n macro The isc_pause_n macro used __pause as a loop counter variable, which is a reserved identifier in C (double underscore prefix is reserved for the implementation). #define isc_pause_n(iters) \\ for (size_t __pause = 0; __pause \u0026lt; iters; __pause++) { ... } This violates C standard section 7.1.3 (Reserved identifiers). Fixed by renaming to _isc_pause_i, which uses the single underscore + lowercase letter pattern that\u0026#39;s allowed in function scope: #define isc_pause_n(iters) \\ for (size_t _isc_pause_i = 0; _isc_pause_i \u0026lt; iters; ...) { ... } This macro is only used in spinlock_test.c and rwlock_test.c. Again, 🤷, whatever.\nfix/proxy2-comment Wrong comment. OK. Is it worth a separate commit and separate MR? 
Probably not.\nProposed commit message:\nFix copy-paste error in proxy2.c IPv6 comment The comment in the ISC_PROXY2_AF_INET6 case block incorrectly said \u0026#34;IPv4 source and destination endpoint addresses\u0026#34; when it should say \u0026#34;IPv6\u0026#34;. This was a copy-paste error from the IPv4 case block above. No functional change, documentation fix only. fix/proxy2-range-check Here, Claude Code correctly detected an edge condition where a valid oversized DNS packet was rejected and prepared a unit test for the condition. This is one of the few proposed commits that might end up being merged as it has some merit. At the same time, the real error here is that we should probably sanitize oversized queries earlier and just not pass them downstream.\nFix proxy2 header range check to allow maximum valid payload The check in isc_proxy2_make_header() was rejecting valid headers with payload sizes between 65520-65535 bytes. The PROXYv2 header has a uint16_t len field that stores the payload size (total_size - ISC_PROXY2_HEADER_SIZE). The maximum valid len is UINT16_MAX (65535), so the maximum valid total_size is 65535 + 16 = 65551 bytes. The old check `total_size \u0026gt; UINT16_MAX` rejected anything \u0026gt; 65535, incorrectly rejecting valid total_sizes from 65536 to 65551. Fixed by checking `(total_size - ISC_PROXY2_HEADER_SIZE) \u0026gt; UINT16_MAX` instead, which correctly validates that the wire-format len field fits in uint16_t. Added test case proxyheader_max_payload_size_test to verify maximum valid payload is accepted and one byte over is rejected. Still a nit, not a bug.\nfix/readline-malloc-check Again, in this branch, we are in the territory of bugs, but meh. Claude Code correctly detected that the readline shim, which we use when BSD\u0026rsquo;s libedit is not available, has an unchecked return. 
At the same time, readline is only used in command-line tools (nslookup and nsupdate), so the chance of hitting this bug is infinitesimally small.\nFix NULL pointer dereference in readline() fallback The fallback readline() implementation (used when libedit is not available) called fgets() with a potentially NULL buffer if malloc() failed. This would cause undefined behavior in nsupdate and nslookup interactive modes on systems without libedit when memory allocation fails. Fixed by checking malloc() return value before use. Note: This fallback is not compiled when HAVE_LIBEDIT is defined, which is the case for most builds. However, this fix ensures portability to systems without libedit support. Is this worth a separate MR? Probably not, but we might throw it together into an umbrella MR when fixing small things.\nfix/serial-symmetric-checks Claude Code makes the assumption that isc_serial_gt() lacks an explicit check for an undefined case. The serial arithmetic was committed on 1999-08-30 and is formally correct, as Claude Code itself notices, but it claims it works \u0026ldquo;by accident.\u0026rdquo; The code was committed by Mark Andrews and it correctly covers all the cases, so I disagree with the assessment that it works \u0026ldquo;by accident.\u0026rdquo; I agree only that the checks are inconsistent. Sure\u0026hellip;\nClaude Code then tries to justify the change in so many words:\nAdd explicit undefined-case check to isc_serial_gt for symmetry Per RFC 1982, serial number comparisons are undefined when i1 == i2 + 2^(SERIAL_BITS - 1), i.e., when the numbers are exactly halfway around the 32-bit number circle. isc_serial_lt() had an explicit check for this undefined case: if (a == (b ^ 0x80000000U)) { return false; } isc_serial_gt() lacked this check, relying on the fact that (int32_t)(a - b) would be INT32_MIN (which is \u0026lt; 0) when a and b are 0x80000000 apart, so the `\u0026gt; 0` check would correctly return false. 
While the code worked correctly by accident, this was inconsistent and made the undefined-case handling implicit rather than explicit. Added the same explicit check to isc_serial_gt() for: - Code symmetry with isc_serial_lt() - Self-documenting behavior per RFC 1982 - Explicit handling rather than relying on INT32_MIN arithmetic No functional change: both before and after return false for the undefined case. This is purely a code clarity improvement. Again, this falls into the \u0026ldquo;meh, maybe when we are changing this code\u0026rdquo; bucket.\nVarious Notes on the BIND Experiment I asked Claude to write down notes about the context every time it came up with something, so I could recover the context in case it compressed the context window and forgot what it did before. This takes a long time: you either have to stare at the screen while it contemplates things, or you have to go do something else. The first approach wastes a lot of time; the second causes one\u0026rsquo;s brain to do a lot of context switching. I started with a Claude Pro subscription, but I ran out of tokens very quickly and had to upgrade to a Max subscription, and then I ran out of tokens again. At that point I had already gathered enough data. Code reviews are generally less rewarding and more demanding than creating new code. When writing a new feature, fixing a bug, or refactoring old code, there\u0026rsquo;s the joy of doing something useful. You go through the process of understanding the problem, understanding the problem area, finding a solution for the problem, and then successfully solving the problem. Code reviews require you to do all the preparation steps, but in the end, there\u0026rsquo;s no joy that comes from creating something new. With other people on the team, I know my peers and I know our strengths and weaknesses. An LLM, by contrast, is a statistical model that tends to overstate its results, doubles down when you point out an error, and aims to please you. 
Any code produced by the LLM has to be understood and rigorously reviewed. DNS Secure Telemetry Experiment I asked Claude Code to write a telemetry system that could be plugged into various packages and leak as little metadata as possible. I proposed using libsodium on the client side and libsodium, libuv, and libldns on the server side.\nThis experiment ended with a slightly better result than the experiment with BIND 9. Claude Code was able to produce a prototype of both the client and the server, but it often hit a wall when some part didn\u0026rsquo;t work, because it didn\u0026rsquo;t have a comprehensive understanding of the environment. When the LLM encounters a problem, it often starts doing various experiments, and quite often it comes to the wrong conclusion and rewrites large parts of the code, all without truly understanding the problem it ran into.\nAs an extra experiment I cross-checked the code with Google Gemini and ChatGPT: each of the LLMs found fault with the other\u0026rsquo;s output. Overall, this AI rap battle was very frustrating, and in the end I felt like a secretary to our robot overlords.\nI ended up with prototype code that I had to refactor: the overall quality was mediocre, there was a lot of repeated code, and the code was frankly ugly.\nMy conclusion is that for fast prototyping this might work, but in the end, an experienced software engineer must come in, read and understand the whole codebase, refactor the code for future maintainability, and make sure there is no nonsense in the code. My overall feeling is that the whole process took more time than it would have taken to write the whole thing myself from scratch. But I must admit that having a quick prototype is invigorating in the beginning.\nDNS Load Balancer Experiment I asked Claude Code to write a DNS Load Balancer using Rust. 
I suggested it use the Domain crate for DNS, Tokio for networking, and Wasmer to create WASM plugins.\nIn a way, I was impressed by the result. Claude Code was able to produce a prototype DNS Load Balancer when fed the right prompts and forced to look at the right resources.\nThat said, the current state of things is: I know only a little Rust, and I know nothing about the libraries used or code patterns, so I have two paths forward:\nRead the whole source code, learn Rust and the libraries as I go, reuse the nice bits and rewrite the rest. Give up on understanding the code base and continue to develop the Load Balancer using the LLM. The first option is sustainable and will lead to a codebase that is understood; when there\u0026rsquo;s a bug or security issue, there will always be someone who understands at least part of the code and can fix the issues. Whether this process will be tool-assisted or not would be a choice of the person doing the work.\nCivil Engineering Experiment I bought a house and it is undergoing an extensive renovation. I know nothing about civil engineering, so using my university license for Google Gemini, I started talking to the LLM about the choices the human civil engineers made. I also used it to compare offers for various technologies (heat pump, HVAC, floor heating, exterior shutters, windows, \u0026hellip;).\nUsing an LLM for these tasks was actually very helpful, and my understanding is that a good LLM can be better than a bad civil engineer, but an experienced site supervisor is much better than a good LLM. I ended up firing my civil engineer because they were making bad choices for my project, as pointed out by the site supervisor.\nThat said, when working with an LLM, one has to be extremely distrustful. LLMs are good with words but bad with facts, and they are designed to please, so one must design the prompts carefully. 
I ended up asking \u0026ldquo;are you doubling down again\u0026rdquo; as a follow-up to almost every response. So, not all was good. Gemini kept insisting that the H-shaped steel profile needed special handling in the walls because the cold and heat might make it shrink and expand and the plaster might crack. It only dropped the idea when I pushed it to actually calculate the difference in length - it was 1.3 mm, so basically nothing.\nThere\u0026rsquo;s also this annoying feature - all of the LLMs try to be helpful to the point of sounding like Donkey from Shrek. The constant nagging \u0026ldquo;Do you want me to also do foo or bar?\u0026rdquo; at the end of every answer quickly becomes annoying. FFS, sometimes I just want an honest answer.\nOverall, using the LLM to compare various offers was great: it noticed subtle differences between the offers, summarized them neatly, and helped me pick the best one. The overall helpfulness might also be due to the fact that I don\u0026rsquo;t have a degree in civil engineering. At the same time, you can\u0026rsquo;t really trust everything it says, and it is starting to be difficult to double-check the facts on the Internet because it is now full of LLM-generated content.\nTeaching Experiment So far, this was the most successful experiment for me. I\u0026rsquo;ve inherited some university class curricula and I am using Google Gemini to create speaker notes for the slides, add interactive questions, and create quizzes from existing materials.\nNot all is peachy: it basically ruined student assignments - I got a lot of LLM-generated content and had to give it back, not because of the use of AI, but because the assignments lacked university-level quality (correct citations, mostly).\nAgain as an anecdote, Google Gemini made up a non-existent Czech word for a cryptography term. 
When pushed, it created another non-existent Czech word, and only when pushed again did it finally admit there\u0026rsquo;s no such Czech term and that the English term is normally used.\nFinal Notes I\u0026rsquo;ve used an LLM in three general areas:\nAnalyzing and fixing an existing code base - the results were poor. Writing code in an unfamiliar language for something I am an expert on - the results seem to be solid, but I can\u0026rsquo;t really assess the quality of the code (yet) because I am unfamiliar with the language. Asking questions about topics where I am not an expert. This was mostly helpful (socially), but since I am not an expert in the area, I can\u0026rsquo;t really assess the quality (technically). My anecdotal evidence shows that using an LLM is good for quick prototyping, helping with understanding unfamiliar areas of the codebase, and helping with tedious simple tasks. It is not so helpful when it runs into problems - the LLM often runs in circles, makes bad decisions (probably because it lacks enough context), or even produces wrong results.\nWhen it gives you the result you wanted or expected, it feels nice, but I am aware that one must be wary of the echo chamber mode of an LLM. It abuses human biases. When the LLM goes wrong, my frustration level goes up more quickly than it would with a human. I attribute this to the fact that you know that the normal human interaction would end with: \u0026ldquo;Ok, you need to study this and learn this and then we will talk again,\u0026rdquo; and the person would be humble about things they don\u0026rsquo;t know. With an LLM, there\u0026rsquo;s no such thing. Improvements only come with a new model, and while you can work with the context window, the moment the context gets compressed or you start a new context, it is all lost. 
It is as if you were given an inexperienced intern every time.\nPersonally, I don\u0026rsquo;t think using an LLM for large tasks saves any time unless you want to rely on it completely. On the contrary, most of the tasks I have tried took longer, since you have to spend time explaining the task to the LLM, then you get the results, and then you need to carefully consider and review the results and circle back.\n","permalink":"/blogs/2026-02-26-bind9-claude/","tags":["BIND","AI","Open Source"],"title":"Claude Code Experiments with BIND 9"},{"categories":["Kea","Open Source","Stork"],"contents":"Stork 2.4 is a new stable version, significantly updated in several key ways. We addressed 220 issues since Stork 2.2.0, 81 of them in this release. Stork 2.4 required major changes to transition to updated versions of the PrimeNG and Angular components we depend on. The change in the connection model for Kea created a lot of work for Stork, and we have significantly expanded our DNS support. Finally, we had a Stork security audit/pentest in December, and we are fixing a number of issues raised by that audit in this release. For details, see the Stork 2.4 release notes. The blog below focuses on feature changes that can be seen in the user interface.\nDirect API support for Kea 3.0 and later versions The Kea Control Agent (CA) is deprecated as of Kea 3.0; the DHCP daemons can now receive commands directly (without the CA). Supporting this CA-less mode required a major refactoring for Stork, which relied heavily on the CA for managing Kea instances.\nNow Stork connects directly with the daemon it is managing (e.g. DHCPv4, DHCPv6, or DDNS). The CA is listed as just another daemon in the interface. In the screenshots below, as throughout the Stork user interface, you no longer see \u0026ldquo;Apps\u0026rdquo;; these have been replaced with \u0026ldquo;Daemons\u0026rdquo;.\nDNS Support The DNS support in Stork has been greatly expanded. 
Stork can monitor the daemons, retrieve and display the configuration files, and transfer zone contents from both BIND 9 and PowerDNS servers. (We anticipate that PowerDNS may be a popular choice for Kea users, because both systems feature a backend SQL database architecture for data storage.)\nBIND 9 server configuration viewing Stork can parse and display a BIND 9 configuration file, as it has been able to do for Kea. This capability has been extended and tested with a range of different configurations. In the screenshot below, you can see we have retrieved and parsed a configuration (using rndc).\nZone viewing Stork provides a graphical view of all configured zones, with built-in zones, RPZ zones, and mirror zones explicitly tagged for filtering. The administrator can view all the resource records in a zone. This feature uses the standardized and widely interoperable zone transfer protocol, AXFR.\nThe screenshot below shows a view of the local zones on one BIND server, with a filter applied to hide the built-in zones. The zone type is displayed, including RPZ, and a zone that has a mismatched serial number is highlighted. The overlay shows the zone refresh status of the three configured DNS servers, one of which is a PowerDNS server.\nClicking on a zone, such as the zone bind9.example.com in the screen above, displays a list of the daemons and views that zone is found in. Because Stork has a view into multiple BIND servers, this enables the administrator to easily see which servers are up to date, and which are not.\nClicking on the zone contents in the screen above displays the records, which in this case are minimal.\nWhile the example above was a very small zone, there are many use cases where DNS zones have hundreds or thousands of records. We have worked hard to mitigate the obvious scalability challenges of collecting high-volume data and displaying it in a graphical user interface. 
In the screenshot below, Stork has collected and is displaying a mirror root zone with over 24,000 records. We are caching the records locally in Stork, while providing a button to allow the administrator to manually refresh the data. (Pushing that button triggers an AXFR.)\nUser Interface A major focus for this release is mostly invisible. We have updated our two major interface components, Angular and PrimeNG. These uplifts required quite a bit of refactoring to keep user interface features working.\nSortable tables! The Stork UI presents a lot of information in the form of tables, and this release introduces the ability to sort those tables by clicking on a column header. We have added hideable filter panels above our data tables to make the view more compact and useful.\nSubnet and Shared Networks Views The subnets view has gotten more sophisticated, with warnings for out-of-pool addresses. The example below shows the popup with separate pool utilization statistics for prefix delegation. Here you can also see the up and down arrows in the column headers in the table, showing the new table sorting feature and the hideable filter panel.\nPrefix delegations are displayed in the shared networks view as well, which also offers the filter panel and column sorting.\nSoftware Versions Tracking If you haven\u0026rsquo;t seen it, you might check out the software versions tracking in Stork. Stork maintains information about current versions of BIND and Kea, and will identify older versions throughout the interface. If there is a more recent version that fixes a CVE, that is highlighted with a higher severity. In addition, there is a page under the monitoring menu that shows all the current software versions.\nOn-line Stork demonstration system We invite you to try out our on-line Stork demo at: https://demo.stork.isc.org/login. This demo version was updated with Stork 2.4.0. Note that the configuration resets frequently to wipe out changes made by prior demo users. 
If you would prefer a more extended trial, you can install our packaged demo version, which includes a DHCP traffic generator, a handful of virtual Kea servers, and of course the Stork agents and server. One of our QA engineers has recorded a short video that walks you through the entire installation process for the demo.\nDownload and Installation To download Stork sources, go to ISC\u0026rsquo;s Downloads page. Release notes are posted beside the tarball download link. Most users prefer to install our packages for Debian, Alpine, or RPM-based systems from ISC\u0026rsquo;s Cloudsmith repo.\nThe Stork Quickstart Guide will get you up and running quickly. The Stork Administrator Reference Manual (ARM) may also be helpful.\nStork Professional Technical Support Available With this release, ISC is now offering professional technical support for Stork. This will be offered as a line-item on a Kea DHCP support contract. Support for Kea is available as an annual subscription, with several levels:\nBronze - Premium and Subscriber hooks; ASNs; email support during business hours Silver - Premium, Subscriber, and the Role-Based Access Control (RBAC) hook; ASNs; 24x7 email and phone support with SLAs Gold - Premium, Subscriber, and the RBAC hook; ASNs; 24x7 email and phone support with faster response times The annual cost of our Kea support subscriptions is based on deployment size, as measured by the number of simultaneous leases provided. For more information on the support options, please see our Support page and our Kea Support Subscription datasheet. 
Please feel free to contact us for more information.\nReferences Stork online demo system Stork-users mailing list Stork Administrator Reference Manual (ARM) Stork Release Roadmap ","permalink":"/blogs/stork-2-4/","tags":["Kea","Stork"],"title":"Stork 2.4: Open Source DHCP Management Tool"},{"categories":["ISC","Company News"],"contents":"\nDave Farber, ISC Board Member from 1994 - 2026\nISC and the entire Internet community are mourning the loss of our friend and colleague, Dave Farber. David J. Farber (April 17, 1934 – February 7, 2026) was one of the original members of the ISC Board of Directors, appointed at ISC\u0026rsquo;s incorporation in 1994.\nDave was a professor of computer science, noted for his major contributions to programming languages and computer networking.\nDr. Farber graduated from the Stevens Institute of Technology in 1956 and began an 11-year career at Bell Laboratories, where he helped design the first electronic switching system (ESS-1) and the SNOBOL programming languages. He subsequently held industry positions at the Rand Corporation and Scientific Data Systems, followed by academic positions at the University of California, Irvine, and the University of Delaware.\nAt Irvine his research work was focused on creating the world’s first operational distributed computer system. While a member of the Electrical Engineering Department of the University of Delaware, he helped conceive and organize the major American research networks CSNET, NSFNet, and the National Research and Education Network (NREN). 
He helped create the NSF/DARPA-funded Gigabit Network Testbed Initiative and served as the Chairman of the Gigabit Testbed Coordinating Committee.\nDave was subsequently appointed Alfred Fitler Moore Professor of Telecommunication Systems at the University of Pennsylvania, where he also held appointments as Professor of Business and Public Policy at the Wharton School of Business and as a Faculty Associate of the Annenberg School for Communication. He served as Chief Technologist at the US Federal Communications Commission (2000–2001) while on leave from the university.\nHe was also a Distinguished Career Professor of Computer Science and Public Policy at the School of Computer Science, the Heinz College, and the Department of Engineering and Public Policy at Carnegie Mellon University.\nIn 2018, at the age of 83, Dave moved to Japan to become Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). He loved teaching, and taught his final class on January 22, 2026.\nHe was a Fellow of the IEEE, ACM, and the AAAS.\nDave was a delightful person, part of the bedrock of the Internet, and a great friend to ISC over the course of decades of board membership.
We will miss him.\nWe extend our deepest condolences to Dave\u0026rsquo;s family and friends.\n","permalink":"/blogs/david-farber-memoriam/","tags":["Open Source","Root Server System"],"title":"ISC recognizes \"grandfather of the Internet\" Dave Farber"},{"categories":null,"contents":"ISC\u0026rsquo;s staff members and board are proud to participate in many governance bodies and community initiatives.\nCENTR ISC is an affiliate member of CENTR, the Council of European National Top-Level Domain Registries.\nVicky Risk is ISC\u0026rsquo;s representative to CENTR.\nDNS-OARC OARC is the DNS Operations Analysis and Research Center, which \u0026ldquo;brings together key operators, implementers, and researchers on a trusted platform so they can coordinate responses to attacks and other concerns, share information and learn together.\u0026rdquo; In particular, OARC hosts a number of DNS-related meetings and workshops of interest to ISC. It also has a support role in general DNS operations - measuring the performance and load of key nameservers and publishing the results, providing analysis of DNS performance and post-mortems of attacks, and developing tools for DNS analysis.\nISC was instrumental in setting up OARC and is a \u0026ldquo;Gold\u0026rdquo; member.\nRay Bellis is on the Board of Directors, and Cathy Almond is on the Programme Committee.\nICANN RSSAC Jeff Osborn is the chair of the ICANN Root Server System Advisory Committee (RSSAC), which is the formal body that advises the ICANN Community and Board of Directors on issues surrounding the DNS Root Server System. It is made up of a representative and an alternate from each of the Root Server Operators; Rob Carolina is Jeff\u0026rsquo;s alternate from ISC. RSSAC also includes a representative from each of the IANA Functions Operator and the Root Zone Maintainer (RZM), as well as a liaison from each of the ICANN Security and Stability Advisory Committee (SSAC) and the Internet Architecture Board (IAB). 
RSSAC meets in person at ICANN meetings and frequently via teleconference.\nJeff and Rob, along with Ray Bellis, Dan Mahoney, and Eddy Winstead, are members of the RSSAC Caucus.\nISC participates as the Root Server Operator for F-Root.\nIANA The Internet Assigned Numbers Authority, an affiliate of ICANN, coordinates the domain names, number resources, and protocol assignments that keep the Internet functioning smoothly.\nOndřej Surý is a Trusted Community Representative for the Root Key Signing Ceremony, serving as a Recovery Key Share Holder.\nIETF ISC staff have contributed to IETF for years. We have written numerous IETF standards and reviewed many others.\nOndřej Surý is currently co-chair of the DNS Operations working group.\nNANOG NANOG is the North American Network Operators Group. NANOG holds two large networking conferences a year, plus a series of smaller, free training events. ISC is an occasional sponsor for these meetings.\nT. Marc Jones serves on NANOG\u0026rsquo;s Community Engagement committee.\nOSI ISC is an Affiliate member of the Open Source Initiative. OSI is an industry group promoting open source and the interests of open source publishers.\nVicky Risk is ISC\u0026rsquo;s liaison to OSI.\nRIPE Réseaux IP Européens (RIPE, French for \u0026ldquo;European IP Networks\u0026rdquo;) is the Regional Internet Registry (RIR) responsible for Europe and the Middle East. They also host a network operators\u0026rsquo; meeting twice a year that brings together much of the European networking community. ISC staff participate in the RIPE working groups, meetings, and mailing lists, as well as in various RIPE projects, such as the ATLAS project, hosting equipment in Palo Alto. 
RIPE is an RSO peer, as they operate K-Root.\nOndřej Surý is currently a RIPE Fellow Coach and one of the RIPE Arbiters.\n","permalink":"/governance/","tags":null,"title":"Internet Governance Activities"},{"categories":null,"contents":"ISC has a long history of contributing to, and implementing, open community standards. The documents that specify and discuss current and developing standards for the Internet are called Requests for Comments, or RFCs. That name belies their role, as an RFC that has been assigned the status \u0026ldquo;standard\u0026rdquo; is the primary specification of an Internet standard. Other status codes include \u0026ldquo;draft standard,\u0026rdquo; \u0026ldquo;informational,\u0026rdquo; \u0026ldquo;historic,\u0026rdquo; \u0026ldquo;proposed standard,\u0026rdquo; or \u0026ldquo;experimental.\u0026rdquo; Many of the early RFC documents have status \u0026ldquo;unknown\u0026rdquo; because they come from the long-gone era when an RFC really was just a request for comments.\nThe Internet Engineering Task Force (IETF) has ultimate responsibility for RFC documents, and maintains a complete list of them in their online RFC Directory.\nRFCs authored or co-authored by ISC affiliates ISC staff members and affiliates have written or contributed to over 90 RFCs, listed below.\nRFCs pertaining to DNS The Domain Name System protocols are more than 20 years old. Many of the older RFCs are obsolete, but there are still clients running software implementing the very oldest protocols.\nIn addition to the RFCs authored or co-authored by ISC contributors, you may find it interesting and/or useful to review all the IETF RFCs related to DNS, currently maintained by PowerDNS. FAQS.org also maintains a list of DNS Protocol Related Documents.\nRFCs pertaining to DHCP Because DHCP operates at several levels in the network and operating system stacks, the RFCs that pertain to it include some that do not even mention it. 
ISC has prepared a collection of DHCP references, which is included in every ISC DHCP distribution as doc/References.html.\nOne or more of the authors of each of these RFCs is or was affiliated with ISC in some capacity (e.g. staff member, consultant, director):\nRFC1036 Standard for Interchange of USENET Messages\nM.R. Horton, R. Adams\nRFC1486 An Experiment in Remote Printing\nM. Rose, C. Malamud\nRFC1528 Principles of Operation for the TPC.INT Subdomain: Remote Printing – Technical Procedures\nC. Malamud, M. Rose\nRFC1529 Principles of Operation for the TPC.INT Subdomain: Remote Printing – Administrative Policies\nC. Malamud, M. Rose\nRFC1530 Principles of Operation for the TPC.INT Subdomain: General Principles and Policy\nC. Malamud, M. Rose\nRFC1611 DNS Server MIB Extensions\nR. Austein, J. Saperia\nRFC1612 DNS Resolver MIB Extensions\nR. Austein, J. Saperia\nRFC1846 SMTP 521 Reply Code\nA. Durand, F. Dupont\nRFC1876 A Means for Expressing Location Information in the Domain Name System\nC. Davis, P. Vixie, T. Goodwin, I. Dickinson\nRFC1996 A Mechanism for Prompt Notification of Zone Changes (DNS NOTIFY)\nP. Vixie\nRFC2010 Operational Criteria for Root Name Servers\nB. Manning, P. Vixie\nRFC2052 A DNS RR for specifying the location of services (DNS SRV)\nA. Gulbrandsen, P. Vixie\nRFC2129 Toshiba\u0026rsquo;s Flow Attribute Notification Protocol (FANP) Specification\nK. Nagami, Y. Katsube, Y. Shobatake, A. Mogi, S. Matsuzawa, T. Jinmei, H. Esaki\nRFC2136 Dynamic Updates in the Domain Name System (DNS UPDATE)\nP. Vixie, S. Thomson, Y. Rekhter, J. Bound\nRFC2186 Internet Cache Protocol (ICP), version 2\nD. Wessels, K. Claffy\nRFC2187 Application of Internet Cache Protocol (ICP), version 2\nD. Wessels, K. Claffy\nRFC2308 Negative Caching of DNS Queries (DNS NCACHE)\nM. Andrews\nRFC2317 Classless IN-ADDR.ARPA delegation\nH. Eidnes, G. de Groot, P. Vixie\nRFC2545 Use of BGP-4 Multiprotocol Extensions for IPv6 Inter-Domain Routing\nP. Marques, F. 
Dupont\nRFC2576 Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework\nR. Frye, D. Levi, S. Routhier, B. Wijnen\nRFC2655 CIP Index Object Format for SOIF Objects\nT. Hardie, M. Bowman, D. Hardy, M. Schwartz, D. Wessels\nRFC2671 Extension Mechanisms for DNS (EDNS0)\nP. Vixie\nRFC2756 Hyper Text Caching Protocol (HTCP/0.0)\nP. Vixie, D. Wessels\nRFC2782 A DNS RR for specifying the location of services (DNS SRV)\nA. Gulbrandsen, P. Vixie, L. Esibov\nRFC2845 Secret Key Transaction Authentication for DNS (TSIG)\nP. Vixie, O. Guðmundsson, D. Eastlake 3rd, B. Wellington\nRFC2851 Textual Conventions for Internet Network Addresses\nM. Daniele, B. Haberman, S. Routhier, J. Schoenwaelder\nRFC3197 Applicability Statement for DNS MIB Extensions\nR. Austein\nRFC3291 Textual Conventions for Internet Network Addresses\nM. Daniele, B. Haberman, S. Routhier, J. Schoenwaelder\nRFC3364 Tradeoffs in Domain Name System (DNS) Support for Internet Protocol version 6 (IPv6)\nR. Austein\nRFC3542 Advanced Sockets Application Program Interface (API) for IPv6\nW. Stevens, M. Thomas, E. Nordmark, T. Jinmei\nRFC3582 Goals for IPv6 Site-Multihoming Architectures\nJ. Abley, B. Black, V. Gill\nRFC3584 Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework\nR. Frye, D. Levi, S. Routhier, B. Wijnen\nRFC3724 The Rise of the Middle and the Future of End-to-End: Reflections on the Evolution of the Internet Architecture\nJ. Kempf, R. Austein, IAB\nRFC3750 Unmanaged Networks IPv6 Transition Scenarios\nC. Huitema, R. Austein, S. Satapati, R. van der Pol\nRFC3776 Using IPsec to Protect Mobile IPv6 Signaling Between Mobile Nodes and Home Agents\nJ. Arkko, V. Devarapalli, F. Dupont\nRFC3833 Threat Analysis of the Domain Name System (DNS)\nD. Atkins, R. Austein\nRFC3865 A No Soliciting Simple Mail Transfer Protocol (SMTP) Service Extension\nC. 
Malamud\nRFC3904 Evaluation of IPv6 Transition Mechanisms for Unmanaged Networks\nC. Huitema, R. Austein, S. Satapati, R. van der Pol\nRFC4001 Textual Conventions for Internet Network Addresses\nM. Daniele, B. Haberman, S. Routhier, J. Schoenwaelder\nRFC4007 IPv6 Scoped Address Architecture\nS. Deering, B. Haberman, T. Jinmei, E. Nordmark, B. Zill\nRFC4012 Routing Policy Specification Language next generation (RPSLng)\nL. Blunk, J. Damas, F. Parent, A. Robachevsky\nRFC4033 DNS Security Introduction and Requirements\nR. Arends, R. Austein, M. Larson, D. Massey, S. Rose\nRFC4034 Resource Records for the DNS Security Extensions\nR. Arends, R. Austein, M. Larson, D. Massey, S. Rose\nRFC4035 Protocol Modifications for the DNS Security Extensions\nR. Arends, R. Austein, M. Larson, D. Massey, S. Rose\nRFC4071 Structure of the IETF Administrative Support Activity (IASA)\nR. Austein, B. Wijnen\nRFC4074 Common Misbehavior Against DNS Queries for IPv6 Addresses\nY. Morishita, T. Jinmei\nRFC4095 Attaching Meaning to Solicitation Class Keywords\nC. Malamud\nRFC4096 Policy-Mandated Labels Such as \u0026ldquo;Adv:\u0026rdquo; in Email Subject Headers Considered Ineffective At Best\nC. Malamud\nRFC4116 IPv4 Multihoming Practices and Limitations\nJ. Abley, K. Lindqvist, E. Davies, B. Black, V. Gill\nRFC4286 Multicast Router Discovery\nJ. Martin\nRFC4293 Management Information Base for the Internet Protocol (IP)\nS. Routhier\nRFC4431 The DNSSEC Lookaside Validation (DLV) DNS Resource Record\nM. Andrews, S. Weiler\nRFC4786 Operation of Anycast Services\nJ. Abley, K. Lindqvist\nRFC4843 An IPv6 Prefix for Overlay Routable Cryptographic Hash Identifiers (ORCHID)\nP. Nikander, J. Laganier, F. Dupont\nRFC4862 IPv6 Stateless Address Autoconfiguration\nS. Thomson, T. Narten, T. Jinmei\nRFC4877 Mobile IPv6 Operation with IKEv2 and the Revised IPsec Architecture\nV. Devarapalli, F. Dupont\nRFC4892 Requirements for a Mechanism Identifying a Name Server Instance\nS. Woolf, D. 
Conrad\nRFC5001 DNS Name Server Identifier (NSID) Option\nR. Austein\nRFC5071 Dynamic Host Configuration Protocol Options Used by PXELINUX\nD. Hankins\nRFC5095 Deprecation of Type 0 Routing Headers in IPv6\nJ. Abley, P. Savola, G. Neville-Neil\nRFC5186 IGMPv3/MLDv2 and Multicast Routing Protocol Interaction\nJ. Martin\nRFC5358 Preventing Use of Recursive Nameservers in Reflector Attacks\nJ. Damas, F. Neves\nRFC5507 Design Choices When Expanding the DNS\nP. Faltstrom, R. Austein, P. Koch\nRFC5625 DNS Proxy Implementation Guidelines\nR. Bellis\nRFC5905 Network Time Protocol Version 4\nJ. Martin\nRFC5966 DNS Transport over TCP - Implementation Requirements\nR. Bellis\nRFC6276 DHCPv6 Prefix Delegation for Network Mobility\nR. Droms, P. Thubert, F. Dupont, W. Haddad, C. Bernardos\nRFC6303 Locally Served DNS Zones\nM. Andrews\nRFC6334 Dynamic Host Configuration Protocol for IPv6 (DHCPv6) Option for Dual-Stack Lite\nD. Hankins, T. Mrugalski\nRFC6486 Manifests for the Resource Public Key Infrastructure (RPKI)\nR. Austein, G. Huston, S. Kent, M. Lepinski\nRFC6492 A Protocol for Provisioning Resource Certificates\nG. Huston, R. Loomans, B. Ellacott, R. Austein\nRFC6594 Use of the SHA-256 Algorithm with RSA, Digital Signature Algorithm (DSA), and Elliptic Curve DSA (ECDSA) in SSHFP Resource Records\nO. Surý\nRFC6781 DNSSEC Operational Practices, Version 2\nO. Kolkman, M. Mekking, R. Gieben\nRFC6853 DHCPv6 Redundancy Deployment Considerations\nJ. Brzozowski, J. Tremblay, J. Chen, T. Mrugalski\nRFC6887 Port Control Protocol (PCP)\nD. Wing, Ed., S. Cheshire, M. Boucadair, R. Penno, P. Selkirk\nRFC6891 Extension Mechanisms for DNS (EDNS(0))\nP. Vixie, M. Graff, J. Damas\nRFC6915 Flow Identity Extension for HTTP-Enabled Location Delivery (HELD)\nR. Bellis\nRFC7031 DHCPv6 Failover Requirements\nT. Mrugalski, K. Kinnear\nRFC7216 Location Information Server (LIS) Discovery\nM. Thomson, R.
Bellis\nRFC7227 Guidelines for Creating New DHCPv6 Options\nD. Hankins, T. Mrugalski, M. Siodelski, S. Jiang, S. Krishnan\nRFC7314 Extension Mechanisms for DNS (EDNS) EXPIRE Option\nM. Andrews\nRFC7341 DHCPv4-over-DHCPv6 (DHCP 4o6) Transport\nQ. Sun, Y. Cui, M. Siodelski, S. Krishnan, I. Farrer\nRFC7343 An IPv6 Prefix for Overlay Routable Cryptographic Hash Identifiers Version 2 (ORCHIDv2)\nJ. Laganier, F. Dupont\nRFC7550 Issues and Recommendations with Multiple Stateful DHCPv6 Options\nO. Troan, B. Volz, M. Siodelski\nRFC7583 DNSSEC Key Rollover Timing Considerations\nS. Morris, J. Ihren, J. Dickinson, W. Mekking\nRFC7598 DHCPv6 Options for Configuration of Softwire Address and Port-Mapped Clients\nT. Mrugalski, O. Troan, I. Farrer, S. Perreault, W. Dec, C. Bao, L. Yeh, X. Deng\nRFC7766 DNS Transport over TCP - Implementation Requirements\nJ. Dickinson, S. Dickinson, R. Bellis, A. Mankin, D. Wessels\nRFC7793 Adding 100.64.0.0/10 Prefixes to the IPv4 Locally-Served DNS Zones Registry\nM. Andrews\nRFC7819 Privacy Considerations for DHCP\nS. Jiang, S. Krishnan, T. Mrugalski\nRFC7824 Privacy Considerations for DHCPv6\nS. Jiang, S. Krishnan, T. Mrugalski\nRFC7828 The edns-tcp-keepalive EDNS0 Option\nP. Wouters, J. Abley, S. Dickinson, R. Bellis\nRFC7844 Anonymity Profiles for DHCP Clients\nC. Huitema, S. Krishnan, T. Mrugalski\nRFC7873 Domain Name System (DNS) Cookies\nD. Eastlake 3rd, M. Andrews\nRFC7969 Customizing DHCP Configuration on the Basis of Network Topology\nT. Lemon, T. Mrugalski\nRFC8080 Edwards-Curve Digital Security Algorithm (EdDSA) for DNSSEC\nO. Surý, R. Edmonds\nRFC8156 DHCPv6 Failover Protocol\nT. Mrugalski, K. Kinnear\nRFC8415 Dynamic Host Configuration Protocol for IPv6 (DHCPv6)\nT. Mrugalski, M. Siodelski, B. Volz, A. Yourtchenko, M. Richardson, S. Jiang, T. Lemon, T. Winters\nRFC8482 Providing Minimal-Sized Responses to DNS Queries That Have QTYPE=ANY\nJ. Abley, O. Guðmundsson, M. Majkowski, E. Hunt\nRFC8490 DNS Stateful Operations\nR.
Bellis, S. Cheshire, J. Dickinson, S. Dickinson, T. Lemon, T. Pusateri\nRFC8624 Algorithm Implementation Requirements and Usage Guidance for DNSSEC\nP. Wouters, O. Surý\nRFC8749 Moving DNSSEC Lookaside Validation (DLV) to Historic Status\nW. Mekking, D. Mahoney\nRFC8906 A Common Operational Problem in DNS Servers: Failure to Communicate\nM. Andrews, R. Bellis\nRFC8914 Extended DNS Errors\nW. Kumari, E. Hunt, R. Arends, W. Hardaker, D. Lawrence\nRFC8925 IPv6-Only Preferred Option for DHCPv4\nL. Colitti, J. Linkova, M. Richardson, T. Mrugalski\nRFC8945 Secret Key Transaction Authentication for DNS (TSIG)\nF. Dupont, S. Morris, P. Vixie, D. Eastlake 3rd, O. Guðmundsson, B. Wellington\nRFC8947 Link-Layer Address Assignment Mechanism for DHCPv6\nB. Volz, T. Mrugalski, C. Bernardos\nRFC9018 Interoperable Domain Name System (DNS) Server Cookies\nO. Surý, W. Toorop, D. Eastlake 3rd, M. Andrews\nRFC9108 YANG Types for DNS Classes and Resource Record Types\nL. Lhotka, P. Špaček\nRFC9432 DNS Catalog Zones\nP. van Dijk, L. Peltan, O. Surý, W. Toorop, C.R. Monshouwer, P. Thomassen, A. Sargsyan\nRFC9471 DNS Glue Requirements in Referral Responses\nM. Andrews, S. Huque, P. Wouters, D. Wessels\nRFC9527 DHCPv6 Options for the Homenet Naming Authority\nD. Migault, R. Weber, T. Mrugalski\nRFC9619 In the DNS, QDCOUNT Is (Usually) One\nR. Bellis, J. Abley\nRFC9915 Dynamic Host Configuration Protocol for IPv6 (DHCPv6)\nT. Mrugalski, B. Volz, M. Richardson, S. Jiang, T. 
Winters\n","permalink":"/rfcs/","tags":null,"title":"IETF Standards Written by ISC Contributors"},{"categories":["Compliance","Open Source","Company News"],"contents":"This post is a joint response to the European Open Digital Ecosystem Strategy Call for Evidence, from CZ.NIC, ISC, NLnet Labs, and NetDEF.\nWe, the undersigned organisations, are responsible for the development and maintenance of some of the most well-known and widely adopted Open Source Internet Infrastructure Software.\nCZ.NIC (www.nic.cz) is an association responsible for running the registry for the top level domain of the Czech Republic (.CZ). CZ.NIC also develops and maintains a set of open source software in the area of internet infrastructure, such as the routing daemon BIRD and the DNS servers Knot DNS and Knot Resolver.\nInternet Systems Consortium, Inc. (www.isc.org), a not-for-profit company (operating under US IRC section 501(c)(3)) based in the US, is dedicated to developing software and offering services in support of the Internet infrastructure. ISC is responsible for developing and distributing three widely-deployed open source Internet networking software systems: BIND9, ISC DHCP, and Kea DHCP, and operating one of the 13 root name server systems of the Internet.\nNLnet Labs (www.nlnetlabs.nl) is a Dutch, independent public benefit organisation founded in 1999. Its mission is to write open-source software and contribute to open standards for the Domain Name System and (safe) inter-domain routing, thereby improving the robustness, security and reliability of the Internet. NLnet Labs maintains widely used implementations for DNS, including NSD and Unbound, and for the safety of inter-domain routing, including Krill and Routinator.\nNetwork Device Education Foundation (NetDEF - www.netdef.org) is a US 501(c)(3) not-for-profit company registered in California. NetDEF hosts and maintains the public CI/CD infrastructure for FRR (the Free Range Routing project).
NetDEF also ensures compliance, and maintains, tests, and develops code for most of the routing protocols in FRR. FRR is widely used as the routing stack on everything from home routing gateways to enterprise routers and data center infrastructure.\nThis submission provides evidence from the perspective of four non-profit organizations employing maintainers for long-lived open source projects in the public interest. Two of us are based in Europe, two of us are based in the US. All four employ EU-based maintainers. We respond together, reflecting that open source is a global movement, without borders. 1. What are the strengths and weaknesses of the EU open-source sector? What are the main barriers that hamper adoption and maintenance of high-quality and secure open source? The Internet’s two core pillars (the Domain Name System and IP routing) are technology areas where open source software is both invisible and dominant. Our software is used by operators of internet and telecommunication infrastructure, as well as national top level domain registries around the world.\nEU public and private sector undervalue their open source dependencies Since open source is freely available, our primary challenge is funding software maintenance and quality assurance activities. We attempt to minimize costs by taking advantage of free services, while also trying to raise revenue via charitable grants and the sale of professional support services.\nThere remains a disconnect between the engineers who select the open source software we develop to operate their critical infrastructure and their organisation’s willingness to fund sustained development — the easiest mode of behavior for most companies is to be a free rider. In contrast to physical network equipment, deploying (gratis) open source software is less often backed by a budget.
In particular, we perceive that private-sector organisations in the EU are likely to assume funding will be taken care of by others.\nWe suggest concrete measures corresponding to this observation in Section 3.2.\nMost open source development and distribution infrastructure is US-based Another structural weakness lies in the reliance on US-based technology companies. Every open source project requires development infrastructure: a shared software development environment that enables collaboration and a stable platform for version control; a group/project communications system, such as chat or email; compute resources for building and testing the software; and hosting resources for publishing and storing the software and associated files. These resources cost money.\nMost low-cost or free infrastructure for open source development and distribution is operated by the big US tech giants Microsoft (GitHub), Amazon, and Google. In most cases, there is no cost-equivalent alternative from an EU organization. Historically, it is difficult to start or grow a business that can effectively compete with these entrenched low-cost providers, but it is a weakness to have a sole source in the market for these infrastructure services.\nWe suggest concrete measures corresponding to this observation in Section 3.3.\nAvailable funding is short-term and feature-focused Funding for both services and personnel must be stable, as abrupt changes in funding cause disruption and loss of momentum. Maintainer turnover can result in significant loss of intellectual capital for the project, and modifying systems to migrate from one development environment to another can consume scarce resources without contributing anything productive for the software users. For this reason, programs which award grants for a year, or for even shorter periods, are not the best way to support stable long-term maintenance in the open source ecosystem.
While we would love to see more grant funds, we would like to see more funding available for maintaining existing open source, not just new projects or new features, and this would be most useful if the grants cover a multi-year period. Ideally downstream users should be fully funding maintenance.\nWhen designing new approaches to funding open source, it is important to consider that open source organizations are development-heavy, and generally have limited resources to engage in contract negotiation or extended procurement cycles. While it is important to have accountability whenever public money is expended, the resulting work product is publicly visible in the case of open source development and maintenance. Therefore, it should be possible to minimize the process and contractual overhead required, to make funding more accessible to open source projects. Low-overhead approaches have been pioneered by the likes of NLnet and Sovereign Tech Agency using cascade funding models.\nWe suggest concrete measures corresponding to this observation in Section 3.2.\nWhat are the main barriers that hamper sustainable contributions to open-source communities? Sustaining maintainers is the bottleneck It is challenging for casual users to make useful technical contributions to large, complex software systems. Every contributed patch requires extensive review by the maintainers, and often requires extension, addition of tests, and documentation. Not infrequently, community-contributed patches cause problems in other areas of the code, or conflict with features the contributor isn’t using. For these reasons, we have found that our open source projects require a staff of dedicated or nearly dedicated core maintainers. 
Even if the user community is willing to contribute some development, there must be a skilled team of core maintainers, who will need some funding to sustain their work.\nAI technology will not replace human maintainers As of this writing, we have found LLM-based tooling to be more effective at wasting maintainer time (bogus vulnerability reports, nonsensical code contributions by well-intentioned individuals who do not understand what these algorithms produce) than at alleviating the bottleneck of core maintenance. Even if LLM tools for reviewing code improve to the point where they are significantly useful, open source projects will still need humans to direct the tools, interact with the user community, and take responsibility for managing the projects.\nSustainable open source is less about attracting sporadic volunteer contributions and more about ensuring that the necessary coordination, review, and operational continuity can be maintained over time.\n2. What is the added value of open source for the public and private sectors? The Internet relies on open standards and open source implementations of these standards to function. We contributed to research by ICANN’s Security and Stability Advisory Committee (SSAC) titled “The Domain Name System Runs on Free and Open Source Software (FOSS)” that elaborates on this claim for the Domain Name System. The Internet’s routing system similarly runs on open source software. The BIRD routing daemon is used at internet exchanges all over the world to provide their route server function. Hundreds of millions of home routers use FRR (or its previous incarnations, Quagga and Zebra). All known relying party software for Route Origin Validation (ROV), a routing security technology that is seeing global adoption, is open source.1\n3.
What concrete measures and actions may be taken at EU level to support the development and growth of the EU open-source sector and contribute to the EU’s technological sovereignty and cybersecurity agenda? 3.1 Digital Resilience Plan Recent geopolitical and trade developments have demonstrated that Europe’s dependence on non‑EU digital infrastructure, including cloud services and software platforms, can quickly translate into systemic vulnerability. To safeguard the continuity of essential digital operations, the EU should establish a Digital Resilience Plan — analogous to the resilience planning obligations introduced for telecommunications under the upcoming Digital Networks Act (DNA).\nThis plan would define coordinated preparedness and response measures in the event that political or commercial actions by the United States or other third countries restrict European access to cloud, hosting, or software services. Any such disruption could severely affect core functions of the internal market, cybersecurity operations, and public digital services.\nThe Digital Resilience Plan should:\nMap the EU’s critical dependencies on non‑European digital infrastructure, cloud services, and software collaboration platforms. Define fallback strategies and operational continuity measures for sudden disruptions affecting US‑based digital services. Support the growth, funding, and readiness of EU‑based open‑source software projects and platforms that can serve as interoperable and immediately deployable alternatives under stress conditions. Ensure that public investment and procurement mechanisms, including those linked to CRA and NIS2 implementation, prioritise open‑source components (co-)maintained within EU jurisdiction. Coordinate cross‑sector recovery planning between national cybersecurity authorities, digital infrastructure providers, and open‑source communities maintaining core Internet technologies.
By formally integrating EU‑based open‑source software and neutral development infrastructure into the resilience framework, the European Commission would strengthen both technological sovereignty and supply chain security. The goal is not isolation, but preparedness — ensuring that Europe’s digital ecosystem remains operational, secure, and sustainable even under geopolitical stress.\n3.2 Use existing hooks in the CRA and its implementation We call upon the EC to: (1) leverage existing hooks in the CRA and its implementation, expediting the development of the voluntary security attestation programme into an instrument that can fund sustainable maintenance of open source software; (2) establish that ‘due diligence’ under Article 13(5) involves verifying that any integrated components are from ‘fully-funded’ sustainable projects; and (3) execute the Union-wide dependency assessment (in ADCO CRA) to further increase visibility into the role of open source as the foundation of our current digital ecosystem. Expedite developing the Voluntary security attestation programme Open source maintainers, or their project stewards, who invest in the development process and documentation improvements required to meet CRA requirements should be able to recoup all of their maintenance costs and some of their development costs. We ask the EC to expedite developing the voluntary attestation program for the CRA.\nLink ‘due diligence’ obligations to project sustainability In Section 1, we argued that sustaining maintainers is the bottleneck for sustainable contributions to open source software. We posited that sustainable open source is less about attracting sporadic volunteer contributions and more about ensuring that the necessary coordination, review, and operational continuity can be maintained over time.\nTo sustain organisations like ours, as well as individuals who would prefer to make open source maintenance their careers, funding is a necessity.
The CRA can provide for a new system of incentives that provides a source of funding if and only if the ‘due diligence’ obligation connects directly to the aforementioned voluntary security attestations program.\nTo do so, we recommend that the EC ensure a tight closed loop between the expression of maintenance costs in the attestations (Article 25) and the due diligence requirements (Article 13(5)) — responsible parties inside Manufacturers should clearly appreciate that the use of components from underfunded open source projects must be expressly called out in the cybersecurity risk assessment (Article 13(2-4)) and does not represent ‘industry best practice’. Manufacturers should be assessed by Market Surveillance Authorities on whether they are using underfunded open source projects and whether they are engaging in community support or just free-ridership. A virtuous feedback loop should be expressly considered in developing the program.\nExecute the Union-wide dependency assessment The ADCO CRA should execute on its Art 13(25) ability to perform a Union-wide dependency assessment on open source components. To make this at all possible, the result of the standardisation efforts at CEN/CENELEC corresponding to EU 2024/2847 (CRA), Annex A, part II, para 1 should be critically examined to meet the stated objectives in the law text.\nWithout a firm requirement on Manufacturers to produce SBOMs that include stable and globally unique identifiers that identify components and their specific versions, data quality will be problematic.
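To make the identifier point concrete — this fragment is our illustration, not part of the submission, and the component name and version shown are examples only — a CycloneDX-style SBOM entry can carry a package URL (purl) that pins a component and its exact version in a stable, globally unique way:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "bind9",
      "version": "9.18.28",
      "purl": "pkg:generic/isc/bind9@9.18.28"
    }
  ]
}
```

SBOMs whose components all carry identifiers like the purl above can be aggregated mechanically across Manufacturers; free-text component names cannot.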
There can be no such census without them: EU dependency on open source software will be underreported and politically undervalued, and its potential as a crucial contributor to the EU’s technological sovereignty, security, and competitiveness will remain underutilized.\nWhile initiatives like Germany’s Sovereign Tech fund are promising and welcome, and should be scaled up to EU level,2 direct funding of all of the deserving open source projects is probably infeasible. With sufficient investment, and building upon existing open source census efforts, the CRA Union-wide dependency assessment should create a source of open data to prioritize investment by public and private sectors alike.\n3.3 Level the playing field for open source development We call upon the EC to: (1) Set a procurement norm for IT projects involving open source components. Such norms should recognize community commitment and project engagement rather than solely lowest-cost bids. (2) Fund EU-based services open source developers need Procurement norms should support open source development The EC should set a procurement norm for itself, the EU member states, and the private sector, requiring a budget line item targeted at sustaining upstream development communities for all IT projects involving open source components. Such a norm would be in line with ENISA guidance on the NIS2 implementing act for the digital sector.3\nResponding to commercial RFPs and dealing with government contracts are outside the expertise and beyond the capabilities of many open source project teams.\nSystems integrators and consultants who provide these services under publicly-funded contracts should be required to ensure that the actual maintainers are fully funded and sustainable for the duration of the tender, if necessary by directing some portion of their revenues to the actual maintainers of that open source software. 
This would better enable governments to adopt and successfully use open source at scale, while also providing some support to maintainers.\nFund EU-based services open source developers need Funding development infrastructure and services for open source can also stimulate the development of EU-based cloud services. Currently, open source developers rely heavily on free services from Microsoft (GitHub software version control, CI/CD, and publishing), Amazon/AWS (free or reduced-cost computing, publication), and Google (email, storage, and document collaboration). Providing European alternatives to these dominant providers will make the open source community more resilient, while developing the European tech sector.\nTo give a concrete example, the strategy could stimulate growth of the multiple small, alternative community-led development platforms that already exist for open source development services.4 It would strengthen digital resilience if the EU encouraged the growth of multiple smaller, interoperable hosting and collaboration platforms operated under European jurisdiction. Targeted funding or procurement preference could stimulate the ecosystem, ensuring that open source communities are not structurally reliant on a few commercial providers outside the EU.\n4. What technology areas should be prioritised and why? The technology areas of focus should be derived from a competent technical risk analysis (e.g., via the Digital Resilience Plan described in Section 3.1) and focus on areas where service disruption would lead to widespread infrastructure and/or security failures.\nProposed priority areas:\nNetworking applications, such as DNS, BGP, DHCP, and associated systems Open source networking stacks, and the Linux kernel itself. Maintenance of the most widely-used cryptography libraries, and ongoing development of post-quantum cryptography libraries. 
Systems used for centralized administrative user authorization and access controls for network devices and critical applications. Open collaboration on Internet protocol development, including collaboration on development of solutions to newly-discovered security challenges to existing protocols. (This work cannot simply be delegated to the biggest, most well-funded technology companies; there should be funding for smaller open source organizations to participate as well.) 5. In what sectors could an increased use of open source lead to increased competitiveness and cyber resilience? /no response/\nhttps://rov-measurements.nlnetlabs.net/stats/\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nSee e.g. “Funding Europe’s Open Digital Infrastructure: The Economic, Legal, and Political Feasibility of an EU Sovereign Tech Fund (EU-STF)”\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nEuropean Union Agency for Cybersecurity (ENISA), NIS2 Technical Implementation Guidance (report, Publications Office of the European Union, 2025): “Consider supporting the communities developing and maintaining FOSS and invest in a mutually beneficial relationship with them. Where effective, this could involve relationships with the relevant OSS steward that ‘provid[es] support on a sustained basis for the development and ensures the viability of those products.’”\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nCodeberg e.V. is a registered non-profit association based in Berlin, Germany. SourceHut is a sole proprietorship based in Hoorn, the Netherlands.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/blogs/open-digital-ecosystem-response/","tags":null,"title":"Joint response to the European Open Digital Ecosystem Strategy"},{"categories":["BIND","DNS","Annual Reports"],"contents":"2025 Summary BIND 9 remains a very functional, reliable, and well-supported option for a self-hosted DNS system. A lot of sustained hard work goes into this open source project. 
I have attempted to summarize the major accomplishments of the team in 2025.\n2025 was another busy year for the BIND 9 development team. There were no changes in personnel. The development team consists of:\nMark Andrews, the original DNS protocol policeman; Artem Boldariev, who has been implementing encrypted transports support, DOH, and DOT; Evan Hunt, a long-time maintainer and great explainer who added zone templates and the synthetic records plugin; Matthijs Mekking, our DNSSEC specialist who has been making DNSSEC easier to deploy; Aydın Mercan, who has a special interest in cryptography; Alessio Podda, who specializes in performance engineering; Aram Sargsyan, who has recently been updating our catalog zone support; Colin Vidal, who is focused on improving configuration and management; and The DNS Development Director and team lead, Ondřej Surý, who is one of our most prolific developers. The QA team, led by Michał Kępień, uses automation wherever possible. Michał\u0026rsquo;s team includes:\nŠtěpán Balážik, who has been updating or re-implementing system tests; Andoni Duarte, who has been tinkering with our continuous integration system and doing release engineering; Nicki Křížek, original author of the dns-shotgun tool and frequent release manager; Michal Nowak, who quietly fixes many broken things; and Petr Špaček, who specializes in performance testing and protocol development. Development Focus - Modernization and Performance The 9.20 stable branch introduced a huge change to BIND, replacing the venerable Red-Black-Tree database (RBTDB) with a QP-trie database by default. Because of the risk associated with such a major change, the RBTDB was retained in 9.20 as a fallback option, accessible by explicit configuration or a compile-time flag. Since the initial release, we have not seen any significant issues related to this change, so the RBTDB will be removed entirely in the 2026 stable version, 9.22. 
In 2025 we promoted BIND 9.20 to ESV (Extended Support Version) status, as it has been a very solid branch.\nWe are continually trying to improve BIND performance, and 2025 included several more of these efforts. We improved on our existing least-recently-used cache-expiration mechanism by implementing a SIEVE LRU-based mechanism that triggers when the cache is close to the max-cache-size limit. This improved recursive server performance. (We will be posting some updated recursive performance tests in early 2026.) A recent blog post provides the results of comparative performance testing of BIND 9.18 vs 9.20 in authoritative applications. We found overall authoritative performance improved 4 - 7% for most profiles. We also found a 28% increase in memory usage for profiles with many zones, although this has been remediated in the 9.21 development branch, which will be reflected in 9.22 once that is released.\nAnother big change to the project in 2025 was the adoption of the Meson build system, replacing the old autotools. This required adjustments to our automated CI systems, but it has resulted in faster build times and better developer ergonomics.\nAlthough the team is mostly focused on refactoring and maintenance, we did add a surprising amount of new functionality in 2025.\nSignificant Features Added A new named-checkconf -e option prints the effective server configuration, including all the default settings, that would result from loading the specified configuration file into named. This has been a frequently requested feature for at least six years, and required other changes, such as the creation of a separate config option for the root trust anchors (instead of bind-keys).\nA new plugin automatically synthesizes reverse PTR responses from IP addresses. It also works in \u0026ldquo;forward\u0026rdquo; mode, which synthesizes A/AAAA records. 
An allow-synth address-match list can be used to limit the network addresses for which the plugin may generate responses.\nTo simplify the configuration of multiple similar zones, BIND now supports a zone template mechanism. Template blocks containing zone options can be defined at the top level of the configuration file; they can then be referenced in zone statements. A zone referencing a template will use the options in the specified template block as defaults.\nWe have continued our campaign to improve zone updates. We fixed a bothersome issue where BIND could begin responding to queries before all the protective RPZ zones had loaded, potentially exposing users to malicious zones. We added a notify-defer option (delayed batching of NOTIFY messages) for catalog zones, specifically. We added the ability to detect and restart stalled zone transfers. We implemented the ZONEVERSION draft, and added new record types for HHIT and BRID.\nAfter implementing our Key and Signing Policy (KASP), we have made numerous further operational improvements to BIND\u0026rsquo;s DNSSEC support. We added a new option, manual-mode, to dnssec-policy. When enabled, named does not automatically perform the next state transition; instead, the pending transition is logged. Only after manual confirmation, with rndc dnssec -step, is the transition made. Also known as a \u0026ldquo;dry run,\u0026rdquo; this feature will help lower risk during migration to the new dnssec-policy system.\nIn addition to the manual-mode option in dnssec-policy, a new option to named-checkconf can check if your current key-directory (DNSSEC key configuration) is in sync with the given dnssec-policy.\nWe added the new DSYNC record, a generalized use of the familiar notify function that is intended to help with parent-child communication, maintaining up-to-date delegations, including DNSSEC information. 
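As a sketch of the zone template mechanism described above, a named.conf fragment might look like the following; the block name, the options chosen, and the exact override behavior are illustrative, so consult the BIND 9 ARM for the authoritative syntax:

```
// Hypothetical template holding shared zone options.
template "standard-primary" {
    type primary;
    allow-transfer { 192.0.2.1; };
    notify yes;
};

zone "example.com" {
    // Options from the template are used as defaults here.
    template "standard-primary";
    file "example.com.db";
};
```

Per the text above, the template only supplies defaults; options given directly in the zone statement still apply to that zone.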
We added deprecation warnings for weak DNSSEC algorithms (see deprecated list, below).\nMany of these new features, as well as others, were also backported to the 9.18 \u0026ldquo;old-stable\u0026rdquo; version.\nDeprecated or Removed Features We remove features for various reasons: to simplify the code; because the feature has become a vector for a CVE; or when the industry has decided that a feature or encryption algorithm is insecure or otherwise a bad idea. We do try to advertise our plans ahead of time on the bind-users mailing list, unless the deprecation is necessary due to a CVE or change in the Internet Standards. This is a partial list of the features and options we deprecated in 2025:\nThe auto-dnssec configuration statement was completely removed (use dnssec-policy instead). Obsoleted DNSSEC statements include: dnskey-sig-validity, dnssec-dnskey-kskonly, dnssec-update-mode, sig-validity-interval, update-check-ksk, and dnssec-secure-to-insecure. The trusted-keys and managed-keys options were removed (use trust-anchors). TKEY Mode 2 (Diffie-Hellman) support was removed: the tkey-domain statement was obsoleted, and tkey-gssapi-credential was deprecated (use tkey-gssapi-keytab). The RSASHA1 and RSASHA1-NSEC3SHA1 algorithms, and DS digest type 1 (SHA1), were deprecated with warnings. The glue-cache option was removed (it is now permanently enabled). The resolver-nonbackoff-tries and resolver-retry-interval statements were removed. The DSCP (Differentiated Services Code Point) feature was removed. The keep-response-order option was removed; fixed RRset ordering was removed because it could be abused. The sortlist option was removed. The delegation-only zone type and related statements were removed. Source-port specification in various -source statements was deprecated or removed. DNSRPS (DNS Response Policy Service) was removed. We originally added this for Farsight Security, which no longer exists, and to our knowledge nobody ever used it. 
This was a reportedly improved, but proprietary, implementation of RPZ. We patched and published eight new BIND 9 vulnerability notices in 2025. CVE-2025-8677: Resource exhaustion via malformed DNSKEY handling CVE-2025-40780: Cache poisoning due to weak PRNG CVE-2025-40778: Cache poisoning attacks with unsolicited RRs CVE-2025-40777: A possible assertion failure when using the \u0026lsquo;stale-answer-client-timeout 0\u0026rsquo; option CVE-2025-40776: Birthday Attack against Resolvers supporting ECS CVE-2025-40775: DNS message with invalid TSIG causes an assertion failure CVE-2024-12705: DNS-over-HTTPS implementation suffers from multiple issues under heavy query load CVE-2024-11187: Many records in the additional section cause CPU exhaustion BIND QA Efforts The QA team has been focused on automating the processes for preparing releases and evaluating performance results. They also continue to spend a significant amount of time assessing and attempting to reproduce incoming reports of potential security issues. Last but not least, they\u0026rsquo;ve been busy cleaning up the BIND 9 system test suite, rewriting shell-based tests to Python and preparing reusable building blocks for new tests, so that the latter are quick to write, reliable, and maintainable in the long term.\nIn 2025 the team tested, prepared, and packaged 34 open source releases, plus another 18 releases of the BIND -S edition.\nDNS Community Support ISC also contributes to the wider DNS and Internet community. We collaborate with other DNS developers and operators via the IETF and DNS OARC, and participate actively in the DNS-OARC industry Mattermost chat system.\nOndřej Surý is a co-chair of the IETF DNS Ops working group and presented four talks at RIPE Meetings in 2025. 
Ondřej is also currently a RIPE Fellow Coach and one of the RIPE Arbiters.\nRay Bellis, our Director of Operations, is treasurer and a member of the DNS-OARC Board of Directors, and Cathy Almond is chair of the Programme Committee.\nPetr Špaček is a frequent contributor to DNS-OARC, and a co-author and instigator of the radical new DELEG proposal in the IETF. Colin Vidal and Petr Špaček are participating in a small group effort to specify an open, common DNS API.\nMatthijs Mekking is organizing a BCP working group within the DNS-OARC community.\nJeff Osborn, ISC\u0026rsquo;s President, is the chair of the ICANN Root Server System Advisory Committee (RSSAC), which is the formal body that advises the ICANN Community and Board of Directors on issues surrounding the DNS Root Server System. It is made up of a representative and an alternate from each of the Root Server Operators; Rob Carolina, ISC\u0026rsquo;s General Counsel, is Jeff’s alternate from ISC. Jeff and Rob, along with Ray Bellis, Dan Mahoney, and Eddy Winstead, are members of the RSSAC Caucus.\nOur team members made contributions to various non-ISC open source projects in 2025, including:\nMichał Kępień:\nhttps://github.com/rr-debugger/rr/pull/3908 https://github.com/rr-debugger/rr/pull/4031 https://github.com/libgit2/pygit2/pull/1352 Nicki Křížek contributed to these projects:\nhttps://github.com/CZ-NIC/respdiff https://github.com/CZ-NIC/shotgun Petr Špaček also contributed to:\nhttps://github.com/CZ-NIC/respdiff Ondřej Surý:\nhttps://github.com/openssl/openssl/pull/28781 Throughout 2025 and every year, both the development team and our support engineers help customers of our professional support services, as well as open source users. The bind-users mailing list is still going strong and is an active and helpful resource for BIND users.\nISC Staff gave 14 public presentations on BIND or DNS in 2025. 
Streamlining the release process from the git - Ondřej Surý DNS Best Current Practices - Matthijs Mekking What does DELEG mean for DNSSEC? - Petr Špaček PQC for DNSSEC - Ondřej Surý What’s new in BIND 9.20 and what’s coming up next? - Colin Vidal SIEVE - An Eviction Algorithm Simpler than LRU for Web Caches - Ondřej Surý BIND 9 Fast(er) - Ondřej Surý DNSSEC Post-Quantum Crypto Algorithms in BIND - Ondřej Surý US EO Calls for Encrypting DNS - Vicky Risk Devil\u0026rsquo;s Advocacy for DoH - Aydın Mercan Realistic Benchmarking of DNS Resolver Cache Policies - Štěpán Balážik Thinking about Serve Stale - Cathy Almond How to Make BIND 9 Fast(er) - Ondřej Surý Měření výkonu přenosu DNS zón (Measuring the Performance of DNS Zone Transfers) - Petr Špaček Our engineers added or significantly updated these knowledgebase articles in 2025: Operational Notice: The Impact of Stricter Glue Checking Changes to be aware of when moving from BIND 9.18 to 9.20 BIND Statement Fetches Per Zone BIND Block Options BIND Block Topmost Collecting client queries for DNS server testing Recognition Being an open source developer can require a thick skin, because everyone is a critic, but very few of the critics are willing to actually help. However, we are fortunate to have some users who have been a BIG help to the project in 2025. We would like to recognize:\nThe engineer at a Nordic ISP who gives us a PCAP capture every quarter, to feed our resolver test bed with realistic queries. This is data to which we otherwise have no direct access, and it is crucial to our testing process. The researchers at several universities who have been remarkably successful in probing for potential vulnerabilities in BIND, and in the DNS in general. They are an evergreen resource for BIND CVEs. Bob Halley, author of dnspython, which our QA team uses. The European Commission, who selected BIND 9 as one of the open source projects they funded for a bug bounty, administered by YesWeHack. 
The maintainers of Libuv and OpenSSL, who have accommodated our issue reports and pull requests. DNS-OARC, and the community collaborating on the DNS-OARC chat platform, particularly for coordinated response to protocol-level vulnerability reports, as well as for help tracking down and fixing interoperability issues in the worldwide DNS system. Our colleagues in the industry who have worked with us on the DELEG proposal in the IETF. This could make the worldwide DNS system more sustainable and operable in the future. Everyone who has taken the time to submit a complete, reproducible bug report or issue in our BIND 9 GitLab repository, and who has tested our fixes in their environment. ","permalink":"/blogs/2025-bind-report/","tags":["dns","results","BIND"],"title":"2025 BIND 9 Development Report"},{"categories":["Kea","Annual Reports","DHCP"],"contents":"Tomek Mrugalski, Director of DHCP Development, leads the combined Stork, Kea, and DHCP team.\nFour software engineers work on Kea and KeaMA/Stork:\nRazvan Becheriu Francis Dupont Thomas Markwalder Marcin Siodelski The Stork team includes three software engineers:\nSławek Figiel Piotrek Zadroga William Leuschner Włodek Wencel is our DHCP QA manager, and his team includes:\nAndrei Pavel - QA Engineer Marcin Godzina - QA Engineer Emanuel Petr - Systems Administrator and we have one open position! What Does the Team Do? The primary focus of the team is:\nFixing bugs in the Kea DHCP system Developing features in our Stork management tool Writing and running tests and analyzing the results Producing regular monthly development releases and annual new stable versions The team maintains 22 repositories on GitLab, including both our public and non-public work. There is a little activity surrounding our legacy support for ISC DHCP, which is EOL, but we still have a few customers we provide EOL support to. 
In 2025 we made a few improvements to our KeaMA migration utility for ISC DHCP users, which we maintain as an online service in addition to publishing the source code.\nThe team is particularly focused on addressing user and support subscriber requests. This includes:\nTriaging and discussing user-created issues on GitLab; Reviewing pull requests on GitHub and GitLab; and Responding to questions on our Kea and Stork user mailing lists The QA team specifically is engaged in:\nDesigning, writing, and maintaining system tests; Managing continuous integration testing in Jenkins and pipelines in GitLab; Release engineering for Kea and Stork; Building packages and Docker containers, conducting testing, reviewing test results, and assessing vulnerability reports; Managing multiple package repositories on Cloudsmith; Maintaining the Kea and Stork continuous integration testing system, which runs on six OSes, 14 different versions, and 2 different architectures; Wrangling \u0026gt;500k unit tests for Kea; Operating \u0026gt;30k system tests for Kea; and Running seven different code analyzers. In addition, the QA team handles release engineering, which requires building some 900 packages for all the components and operating systems we support.\nKea Team Achievements in 2025 We issued 14 Kea releases; three of them were security releases, addressing vulnerabilities in Kea.\nIn 2025, 473 new Kea issues were opened, and we closed 309 issues (mostly by fixing bugs or adding features). Of these, there were 42 issues opened on behalf of support customers, and we closed 55 issues for our customers. Overall the Kea project has had 3992 issues opened, has closed 3198 issues, and has 794 remaining open as of this writing.\nOngoing maintenance and development continued through the 2.7.x development stream in 2025. 
We expanded API command capabilities (e.g., returning shared network names in list output), added syslog support for advanced log hooks, and resolved lease handling edge cases with database backends. We transformed our database backend support into hooks, to reduce the code exposure for users not choosing to use MySQL/MariaDB or Postgres and to enable advanced users to implement their own hooks to support other databases if needed.\nIn June, we released Kea 3.0, our first long-term support version. This release marked a significant evolution beyond the 2.x stable series, with a planned three-year support lifecycle that gives enterprise users more stability and fewer upgrades compared with the previous annual release cadence. Kea 3.0 open-sourced many components that were previously commercial-only — ISC moved twelve hook libraries into the open source tree, making popular features like host reservation management and subnet configuration accessible without proprietary licenses. (The Configuration Backend and RBAC hook libraries remain under a commercial license and are shared only with ISC support customers.)\nKea skipped a 2.8 version entirely, to emphasize the extent of the changes and the need for operators to consult the detailed release notes to manage configuration adjustments during an upgrade. New installation packaging simplified deployment by consolidating hook libraries into the standard distribution, and major architectural improvements such as integrated HTTP/TLS API sockets replaced the legacy Control Agent for remote management, reducing complexity and opportunities for misconfiguration. Kea 3.0 included a major uplift to client classification, implementing option-class-tagging, which we expect will be very popular. The build system was modernized by migrating to Meson, which improves maintainability and speeds up development workflows. 
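With the database backends split out into hook libraries as described above, a server that keeps its leases in MySQL now loads the backend explicitly. A minimal kea-dhcp4 sketch follows; the library install path and the database credentials are illustrative, not defaults:

```json
{
  "Dhcp4": {
    "hooks-libraries": [
      // Hypothetical install path; the MySQL backend ships as a hook library.
      { "library": "/usr/lib/kea/hooks/libdhcp_mysql.so" }
    ],
    "lease-database": {
      "type": "mysql",
      "name": "kea",
      "host": "localhost",
      "user": "kea",
      "password": "example-secret"
    }
  }
}
```

Deployments that stick with the default memfile backend simply omit the hook, which is the code-exposure reduction described above.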
At the same time Kea continued to bake in security hardening with updated defaults, stricter file and socket handling, and security fixes in the 2.6.x series, which addressed multiple security vulnerabilities affecting API accessibility and file handling (listed below).\nKea and Security In May, we received an alarming new confidential issue, from Matthias Gerstner from the SUSE security team. He had done an extensive review of Kea installation and identified several vulnerabilities in Kea, including CVE-2025-32802: Insecure handling of file paths allows multiple local attacks, CVE-2025-32803: Insecure file permissions can result in confidential information leakage, and CVE-2025-32801: Loading a malicious hook library can lead to local privilege escalation. These were the first reported CVEs in Kea since 2019, and they were quite a wakeup call for us. In addition, Matthias pointed out other, less serious issues and made some recommendations for improvements in Kea\u0026rsquo;s integration with the operating system environment. We spent quite a bit of time reviewing the report and, of course, discussing approaches to better securing Kea.\nWe had two other CVEs in 2025:\nCVE-2025-40779: Kea crash upon interaction between specific client options and subnet selection CVE-2025-11232: Invalid characters cause assert Towards the end of the year, we underwent our first formal external security audit, by Ada Logics. (The final report is here.) The project included development of a threat model for Kea, manual code inspection, and analysis with a range of tools. 
Following the audit, the Kea team adopted the auditors\u0026rsquo; Google OSS-Fuzz project, for an ongoing boost in our fuzz-testing coverage, and we are looking at adopting some of the analyzers they recommended as well.\nTogether, these accomplishments in 2025 reflect a transition year for Kea: from an annually updated stable series to a long-term supported platform with broader open source access to advanced features, tighter security postures, and modernized build and remote management capabilities, while continuing incremental improvements in the development branch in response to user requests.\nKea 2026 Development Plans In 2026 we will continue to prioritize feature requests from our support customers and open source user base. We plan to implement Active Leasequery, the last major RFC feature that we don\u0026rsquo;t yet support in Kea. We will improve interface detection and investigate our options for better handling of high-traffic overflow situations. We are looking at integration with scalable industry-leading solutions for user authentication and authorization, possibly via Stork. We are working on a solution to the well-known Blast Radius protocol vulnerability. We will produce another short-lived (1-year) stable version for those users who need to deploy the latest features.\nAlthough we do not yet have user requests for this, we expect to implement SBOMs for Kea packages and establish SLSA (supply chain security) documentation, and otherwise prepare for the implementation of the EU Cybersecurity Regulation (CRA) in 2027.\nISC DHCP to Kea Migration support We are updating the KeaMA tool for migration from ISC DHCP to Kea, mostly to reflect features that were previously missing in Kea, that have since been added. We continue to maintain an online hosted version of KeaMA, which is used daily by anonymous users to translate ISC DHCP configuration files into Kea configurations. 
Unfortunately, the translated configuration still requires manual review and editing. ISC made a mistake with ISC DHCP way back in 2006: we implemented an \u0026ldquo;interim\u0026rdquo; TXT record, while waiting for the DHCID RR to be specified; the final specification became available a couple of months later, but since then, we (and our users) have been stuck with the legacy of this \u0026ldquo;interim\u0026rdquo; implementation in ISC DHCP. Kea does it properly by implementing only the standard DHCID, but ISC DHCP users adopting Kea are once again having to deal with the legacy implementation.\nStork 2025 Achievements We released eight versions of Stork, including a special release for one security vulnerability, addressed in Stork 2.3.1. 227 new issues were opened and 246 issues closed. (Of course, these issues ranged from minor documentation fixes to significant new development work.)\nThe development team drove two humongous refactoring projects in Stork. These were: – A major PrimeNG library upgrade (329 files, +9.3k lines of code, -9.3k lines of code). – The restructuring necessitated by the removal of the Control Agent from Kea. The Control Agent was terminating all the connections to Kea from Stork, so application communications had to be re-implemented. (322 files, +24kloc, -18.6kloc)\nSecurity fixes in the 2.6.x series addressed multiple security vulnerabilities affecting API accessibility and file handling:\nCVE-2025-8696 CVE-2024-44905 We also fixed a significant DB exhaustion bug.\n7A Security conducted an audit of Stork, specifically focused on penetration testing the application and APIs. They also reviewed Stork dependencies, generated an SBOM, and made recommendations for achieving SLSA compliance.\nMajor New Features Added Marcin Siodelski spent much of the year focusing on adding DNS support to Stork.\nDNS support added in 2025:\nThe zone list is retrieved from multiple servers, aggregated, and cached in Stork. 
The administrator can view zones, search and sort them, and view zone resource records. We have tested retrieving zones from PowerDNS as well as from BIND 9. Marcin attended the DNS hackathon in Stockholm (sponsored by Netnod, DNS-OARC, and the RIPE NCC), where he was able to advance Stork\u0026rsquo;s support for PowerDNS. Stork now has a BIND 9 configuration viewer, in addition to the zone viewer. We have expanded the Prometheus statistics exporter for BIND. Stork now provides information on the current Kea and Stork software versions in the UI, and alerts users if they are using an out-of-date or known-vulnerable version of our software. Our last release of 2025 added support for communicating with Kea 3.0, without relying on the Control Agent (which has been deprecated). We have begun work on adding a comprehensive view of all current leases, across all Kea servers. The design for Lease Tracking is available in Stork\u0026rsquo;s developer wiki on GitLab. One new feature that is a big improvement for usability is the ability to sort the data displayed in table form.\n2026 Stork Development Plans We have ambitious plans for Stork in 2026. We hope to add standard OAuth authentication and authorization for Stork users. We want to add SBOM generation with Supply-chain Levels for Software Artifacts (SLSA) documentation. We are already well underway on adding centralized DHCP lease tracking, but there is more work to do to make this feature deployable at scale. We will start adding BIND 9 configuration support, starting with zone provisioning. We need to continue monitoring our external dependencies, which have the potential to cause Stork vulnerabilities, while we extend and improve our QA coverage. 
We are updating the support for our Kea Migration Agent (KeaMA) tool, as there are still many users operating ISC DHCP.\nAuto-completion is among many user interface improvements in development.\nCommunity Support for Kea, Stork, and DHCP Our support team has been busy, supporting an increasing number of Kea support customers and writing and updating technical articles for our knowledgebase. In 2025 we added:\nThings to be aware of when upgrading to Kea 3.0 Installing the Kea Subscriber Hook Libraries from Sources - Kea 3.0 and later Kea Hub and Spoke Experimentation Kea Database Connection Resilience Limiting DHCP Decline And a trio of articles on High Availability strategies:\nKea HA Strategies Comparison Kea Shared Lease Database Quickstart Kea HA Quickstart Guide A few older documents were updated:\nKea High Availability vs ISC DHCP Failover Securing the MySQL Connection Miscellaneous other contributions:\nMarcin Siodelski published a new version of his open source DHCP diagnostic utility, Endure. Vicky Risk gave a talk on Money and Open-source Sustainability, which was about open-sourcing most of the Kea Premium hooks in Kea 3.0. Sławek Figiel made a contribution to the Sphinx project, which we use to generate our documentation. The Kea-users mailing list is quite active, with a number of users contributing solutions and advice.\nCommunity Acknowledgements We are grateful to the following Kea community contributors.\nA number of users have opened an issue or two, but a handful of users have made multiple Kea contributions this year, or were particularly helpful:\nMatthias Gerstner from the SUSE security team did us a big favor when he reviewed Kea as part of their packaging process. Carsten Strotmann has been our most prolific external contributor to Kea in the past year. Philip Prindeville Alexander Pankratov Dipesh Kayastha Chuck Zmudzinsk reported a bug, with a detailed description and a patch. Steinar H. Gunderson also submitted a patch (thank you!). 
Stork contributors:\nA user named \u0026ldquo;mikygee\u0026rdquo; has been testing Stork\u0026rsquo;s support for DNS, and reporting issues. Alex Hautequest made a couple of feature requests, and offered to help implement them! Rinse Kloek has been active in GitLab, and also gave a presentation at a NLNOG meeting about his experience migrating Delta Fiber from ISC DHCP to Kea. General DHCP Developments In the DHCP industry in general there were two major events in 2025:\n– First, Google implemented DHCPv6 in Android. Previously, Android relied on SLAAC for IPv6 address assignment, creating operational challenges for network operators who depend on DHCPv6 for centralized address management, policy enforcement, logging, and compliance. By adding DHCPv6 client support, Google closed a major ecosystem gap and enabled consistent IPv6 operations across mobile, enterprise, and service-provider networks. This change reinforced the practical importance of robust, scalable DHCPv6 servers like Kea.\n– Second, RFC 8415 (Dynamic Host Configuration Protocol for IPv6) was formally promoted to Internet Standard status by the IETF. This promotion signified that DHCPv6 is a mature, stable, and widely deployed protocol with proven interoperability. ISC\u0026rsquo;s two original Kea developers, Tomek Mrugalski and Marcin Siodelski, personally invested a lot of time in helping to author RFC 8415, consolidating prior specifications and updates into a single, huge (150-page), authoritative specification.\nAI Use in Kea/Stork Development Some team members hate it, some love it. The current approach is open-minded: use of AI is permitted. So far we have determined AI is not suited for nuanced protocol development. One developer tried to use duck.ai to generate an example of multi-relay DHCPv6 traffic for documentation and the result was unusable.\nWe have found that AI can be useful for handling tedious tasks. It can update unit tests or rewrite bits of code (e.g. 
shell script tests in Python). AI was somewhat helpful in fixing a wholesale problem with 3000+ tests failing following the Stork application restructuring. We have been using Cursor Bugbot for some reviews and bug detection, although so far we are unimpressed with it.\nWe are exploring using AI tools in areas where acquiring competence is not essential, such as sporadic changes to the KeaMA web interface. We have experimented with using AI as one of several inputs when evaluating designs and choosing libraries. (Parts of this blog post were initially drafted by AI.) We have also suffered from a deluge of AI applications for the two jobs we posted in 2025. Trying to figure out which applications were \u0026ldquo;real\u0026rdquo; added a lot of overhead to the evaluation process, and we even had one candidate interview over Zoom that we are pretty sure was an AI-generated avatar. That was one visit to the uncanny valley that I hope not to repeat!\n2026 is promising ISC took a big commercial risk, open-sourcing so much of the formerly commercially-licensed Kea hook libraries in 2025. The financial fallout from this, so far, has been negligible. The project is currently well-supported with 92 support subscribers, a number which has been growing steadily. With the support of a generous grant, we were able to add two staff to the Stork project in 2025, our first additions in several years. We are excited about our plans for 2026, which will bring significant new DNS and DHCP management features in Stork, as well as a new stable version of Kea, and an updated KeaMA migration tool.\n","permalink":"/blogs/2025-dhcp-report/","tags":["dhcp","results"],"title":"2025 Stork, Kea, and DHCP Development Report"},{"categories":null,"contents":"In addition to verifying the integrity of your zone data, the DNSSEC chain of trust can also be used to associate other information with your domain, such as PGP keys, to help improve the security of applications.\nWhat is DNSSEC? 
DNSSEC (Domain Name System Security Extensions) adds resource records and message header bits which can be used to verify that the requested data matches what the zone administrator put in the zone and has not been altered in transit. DNSSEC doesn\u0026rsquo;t provide a secure tunnel; it doesn\u0026rsquo;t encrypt or hide DNS data. It was designed with backwards compatibility in mind. The original standard DNS protocol continues to work the same.\nThe resource record types are: RRSIG (for digital signature), DNSKEY (the public key), DS (Delegation Signer), and NSEC (pointer to next secure record). The message header bits are: AD (for authenticated data) and CD (checking disabled). A DNSSEC validating resolver uses these records and public key (asymmetric) cryptography to prove the integrity of the DNS data. A private key (specific to a zone) is used to sign a hash of a set of resource records; this digital signature is stored in an RRSIG record.\nThe corresponding public key is stored in the DNSKEY resource record. The validating resolver uses that DNSKEY to verify the RRSIG against a hash of the corresponding resource record set, proving that the set has not changed. A hash of the public DNSKEY is stored in a DS record, which lives in the parent zone. The validating resolver retrieves the DS record from the parent, along with the parent zone\u0026rsquo;s own RRSIG and DNSKEY records; a hash of the parent\u0026rsquo;s public key is in turn available from its parent. This becomes a chain of trust, also called an authentication chain. The validating resolver is configured with a trust anchor; this is the starting point which refers to a signed zone. The trust anchor is a DNSKEY or DS record and should be securely retrieved from a trusted source (not using DNS).\nAll the names in the zone have corresponding NSEC records listed in order, creating a chain of all the signed record sets. (Corresponding RRSIG records are also created to verify the NSEC data.) 
Because there is no gap, NSEC records are used to provide proof of non-existence of a resource record or to authenticate negative replies.\nWhile the basic theory behind DNSSEC hasn\u0026rsquo;t changed in over a decade, the implementation in BIND 9 has changed quite a bit in recent years, as we have implemented a modern Key and Signing Policy (KASP) function to simplify key and signature management.\nCurrent on-line documentation on DNSSEC in BIND\nPreparing for DNSSEC\nUse resolvers that are DNSSEC-capable and configured to do the validation. All versions of BIND 9 are DNSSEC-capable.\nMake sure network devices don\u0026rsquo;t lose or stop EDNS0 (Extension Mechanisms for DNS) or squash DNSSEC-related traffic. DNSSEC requires EDNS0 to support the larger DNS message sizes and for the DNSSEC OK (DO) EDNS header bit.\nDNSSEC does increase DNS traffic, with more requests and larger responses. For high-volume DNS traffic, prepare for increased bandwidth needs.\nDNSSEC is more sensitive to time issues (i.e. system clocks being really out of sync) than plain DNS; make sure your system clocks are reasonably accurate.\nIf hosting DNSSEC-signed zones, make sure your secondaries also support it and have DNSSEC enabled.\nDNSSEC presentations\nPresentation on the Encrypted DNS call: DNSSEC Multi-Signer Model - Matthijs Mekking\nPresentation at DNS-OARC41: Shorter DNSSEC Outages - Petr Špaček\nPresentation at ICANN 77: DS Automation in BIND 9 - Matthijs Mekking\nPresentation at RIPE 86: DNSSEC Multi-signer Model in BIND - Matthijs Mekking\nPresentation at DNS-OARC: Performance effects of DNSSEC validation on a busy resolver - Petr Špaček\nGuides and training\nISC KB: DNSSEC Key and Signing Policy\nISC KB: DNSSEC Key States\nDNSVIZ remains the single most useful diagnostic tool for DNSSEC\nThe BIND 9 Administrator Reference Manual (ARM) on ReadTheDocs. 
Choose the version of BIND 9 that you are running or plan to run and learn about the DNSSEC Key Manager utility, dnssec-keygen, dnssec-signzone, and other DNSSEC features built into BIND 9.\nISOC 360 DNSSEC page\nDELV, a BIND tool that checks the DNSSEC validation chain\nOlder, possibly still useful references\n10-part 2020 webinar series on implementing DNSSEC with BIND, covering theory, configuration, operation, and troubleshooting. Note that this pre-dated the current Key and Signing Policy feature (KASP).\nThe BIND 9 DNSSEC Guide, now an appendix to the BIND 9 Administrator Reference Manual, is a long-format guide that explains a lot of the background and the \u0026ldquo;why\u0026rdquo; of DNSSEC, as well as the \u0026ldquo;how.\u0026rdquo; It includes the KASP (Key and Signing Policy) tool.\nState of DNSSEC Deployment 2016 (report from the Internet Society)\n\u0026ldquo;Deploying DNSSEC with BIND 9.7\u0026rdquo; given by Alan Clegg at NANOG50\nDNSSEC goes Mainstream: Deployment Incentives, Experience, Questions, a presentation by Suzanne Woolf at NANOG, June 2009. ","permalink":"/dnssec/","tags":null,"title":"DNSSEC and BIND 9"},{"categories":["Company News","ISC"],"contents":"Computers are great, but sometimes it\u0026rsquo;s nice to go \u0026ldquo;old-school\u0026rdquo; with some handwritten messages. So please enjoy our best wishes to you!\nWe hope our customers, partners, and users have a wonderful holiday season and a great year in 2026. Thank you for your ongoing support of open source software for an open Internet!\n","permalink":"/blogs/2025-happy-holidays-from-isc/","tags":["holidays","2025","ISC"],"title":"Happy holidays from ISC!"},{"categories":["ISC","Open Source"],"contents":"We have done quite a few of these profiles now, but fortunately we have not yet run out of staff members! 
Let\u0026rsquo;s learn a little more about one of our senior software engineers, Razvan Becheriu.\nRazvan has been with ISC for almost seven years, after previously working in the gaming, finance, and computer networking industries. He got his first computer at age 17 and first connected to the Internet when he was 18, but he\u0026rsquo;s come a long way from his start writing code on paper! One of his former jobs involved customizing Kea DHCP for a major client, which introduced Razvan to our software. He now designs and implements new features in Kea, plus debugs and fixes issues, writes documentation, and monitors test suites and performance results.\nHe says that the most interesting things about working at ISC are \u0026ldquo;coming up with novel ideas to fix difficult issues, but also doing routine work like investigating crashes and core dumps, multi-threading races and related issues, and resolving memory leaks caused by reference loops.\u0026rdquo; Since ISC projects are maintained for multiple systems, he appreciates the chance to work with both portable and platform-specific code.\nShowing his typical sense of humor, Razvan says he wanted to work at ISC for the \u0026ldquo;fame and glory\u0026rdquo; of contributing to open source. \u0026ldquo;I learned that the DHCP server and client were developed by ISC and they run virtually everywhere, and I wanted to have my name in the AUTHORS file alongside all those professional people.\u0026rdquo;\nRazvan loves working at ISC because \u0026ldquo;it offers me a lot of freedom. Working remotely has allowed me to work from a lot of different places, and now it lets me spend time with my kids every day.\u0026rdquo; He adds, \u0026ldquo;I get to do C++ programming, which I love. I like that the Kea project is still fairly new and uses new coding standards, design patterns, and architectural choices. 
Also, the team is very diverse, which I love because this allows us to see each task from different perspectives.\u0026rdquo;\nRazvan\u0026rsquo;s passions include computers, computer games, science, and technology. He also enjoys traveling and getting to discover new places and people. He has visited Australia, New Zealand, Cambodia, Vietnam, Egypt, India, and many countries around Europe; he loves the Greek islands and the Mediterranean Sea, and he likes lots of Asian cuisines. He says, \u0026ldquo;I think the best thing I bring to ISC is a different way of thinking. I am hard-working and relentless in discovering the cause of obscure issues.\u0026rdquo;\nHe was born and raised in Bucharest, but now lives in Brasov (Romania) with his wife and two children - and his beloved cat, Karmi. You\u0026rsquo;ll often find him talking to his twin brother about computer science or letting off steam by playing Hitman on his Xbox.\nRazvan has a degree as an automation and computer science engineer from the Polytechnic University of Bucharest, but he says, \u0026ldquo;Unfortunately, that doesn\u0026rsquo;t help me understand all the different ways my toddlers can break things!\u0026rdquo;\nThank you for reading this profile; we hope that sharing a little information about our staff members helps strengthen our connection to our customers and users. We welcome your feedback at marketing@isc.org, and thank you for using ISC’s software!\n","permalink":"/blogs/isc-profile-becheriu/","tags":["Open Source","Profiles"],"title":"Meet an ISC Staff Member – Razvan Becheriu!"},{"categories":["BIND","Performance"],"contents":"BIND 9.20 is well established and stable. This is a good time to look at the performance of BIND 9.20 vs 9.18, as many users have not yet adopted 9.20. The bottom line is that BIND 9 authoritative server performance is remarkably stable, and authoritative users updating from 9.18 to 9.20 should anticipate a small performance improvement. 
Some users, with a large number of zones, may see a significant increase in memory usage, however, as explained below. There are a number of other changes to be aware of when moving from BIND 9.18 to 9.20, which are explained in this Knowledgebase article.\nAs part of the development cycle of BIND 9, we carry out a set of performance tests. Our performance laboratory system for DNS authoritative testing (somewhat unimaginatively called \u0026ldquo;perflab\u0026rdquo;) continually cycles through all configurations registered with it, building BIND 9 and running performance tests. (Details of the system have been presented elsewhere, so they will not be described here.) In this way, as the development branches are updated with changes, the effect of those changes on performance is quickly visible. We use a completely different methodology for testing DNS resolver operations, because that is impacted by factors on the live Internet, and tests with canned queries are a poor predictor of real-world performance. Resolver performance results will be presented in a separate blog post.\nWe have six test profiles:\nOne million zones: These tests simulate the environment of a web hosting provider, whose server might be serving zones for a number of customers. Each of the zones in the test is very small, comprising just an SOA, NS, A, and AAAA record. Note that in this scenario, there is a significant 28% increase in memory use in 9.20. We have prioritized performance and stability, vs memory usage. One million delegations: This configuration simulates a large top- (or second-level) domain, where the zone comprises almost exclusively delegations to the zones of subdomain owners. The se-TLD zone. This is a download of the actual .se zone, with all the DNSSEC records stripped out. (It is a copy of the TLD tab from this benchmark from CZ.NIC: https://www.knot-dns.cz/benchmark/). 
The se-TLD-nsec zone, which is the actual .se zone, signed with NSEC (not NSEC3), queried with the names of the zones with a random suffix appended, a bit like a pseudorandom subdomain attack. One million resource records: This is the opposite extreme to the one million delegations test. In that test, BIND returns referrals in response to queries; in this test BIND returns resource records from within the zone. Root zone: As some of the twelve root server operators run BIND on at least some of their servers, and given the crucial importance of the root zone to the global DNS infrastructure, we test the performance of BIND when serving a copy of the root zone. For each scenario, the query set is one that will result in about 5% NXDOMAIN responses (with the exception of the root zone test, where we would expect a much higher rate of NXDOMAINs).\nTo obtain a figure for the query rate that BIND can handle, perflab starts up the server under test. It then fires queries at the server for 30 seconds and calculates a query rate. It runs the 30-second measurement 10 times for a single test. The first measurement is discarded from calculations, to allow the server to settle down, i.e., for all data structures to be created and reach their final size. In order to ensure the stability of the results, we repeat this procedure around 50 times. In the table below we report the median across the 50 tests.\nWe also measure memory usage. For those scenarios where it is relevant, we measured memory usage when the server first starts up, when the glue cache is empty. (This is obviously not representative of a real operational scenario, but a baseline.) The second measurement is actually the maximum memory usage, after the glue cache is presumably fully populated.\nTest Results BIND 9.20 performed 4 - 7% better than 9.18 in these tests for most profiles. The gain was greatest in the 1M delegations profile, and least for the root zone and 1M zones profile. 
Memory usage was relatively flat, with two notable exceptions: BIND 9.20 used 28% more memory in the 1M zones test and 35% less memory in the se-TLD zone test. The increased memory usage with many small zones is due to changes we made to improve scalability.\nBoth 9.18 and 9.20 use a per-zone pool of locks to ensure that the server can keep serving a zone while a zone transfer is in progress. For 9.20, we increased the size of the lock pool and introduced a new, higher performance kind of lock. This leads to increased performance across the board, but incurs a fixed memory overhead per-zone. On deployments with a large number of small zones, this overhead adds up. In 9.21, we switched to a global pool of locks, instead of a per-zone one. This will resolve the memory issues for deployments with a large number of zones with the release of our next stable version, BIND 9.22.\nThe decrease in memory usage in the se-TLD test is due to optimizations done to the glue cache in 9.20.\nAuthoritative performance (queries per second) of BIND 9.18.42 vs BIND 9.20.16 Here is the data in table form. 
Performance (QPS)\nAbsolute Performance in Queries Per Second\ndataset v9.18.42 v9.20.16\n1M RRs 718,015.55 qps 755,373.35 qps\n1M delegations 699,260.55 qps 746,582.25 qps\n1M zones 807,216.10 qps 811,851.70 qps\nroot zone 352,215.00 qps 330,801.10 qps\nse-TLD 653,827.20 qps 684,612.75 qps\nse-TLD-nsec 312,074.45 qps 324,450.55 qps\nPercentage Increase in QPS of 9.20 vs 9.18\ndataset v9.18.42 v9.20.16\n1M RRs 0.00 % 5.20 %\n1M delegations 0.00 % 6.77 %\n1M zones 0.00 % 0.57 %\nroot zone 0.00 % -6.08 %\nse-TLD 0.00 % 4.71 %\nse-TLD-nsec 0.00 % 3.97 %\nMemory usage (MB)\nAbsolute Memory Usage\ndataset glue_cache v9.18.42 v9.20.16\n1M RRs x 645.50 MB 630.40 MB\n1M delegations empty 489.65 MB 476.80 MB\n1M delegations full 552.45 MB 554.70 MB\n1M zones x 9503.35 MB 12180.50 MB\nse-TLD empty 795.70 MB 823.00 MB\nse-TLD full 2443.70 MB 1571.30 MB\nse-TLD-nsec x 1820.15 MB 1910.20 MB\nPercentage Increase in Memory Used in 9.20 vs 9.18\ndataset glue_cache v9.18.42 v9.20.16\n1M RRs x 0.00 % -2.34 %\n1M delegations empty 0.00 % -2.62 %\n1M delegations full 0.00 % 0.41 %\n1M zones x 0.00 % 28.17 %\nse-TLD empty 0.00 % 3.43 %\nse-TLD full 0.00 % -35.70 %\nse-TLD-nsec x 0.00 % 4.95 %\nWe hope that this information removes concerns about a performance regression from the list of considerations when planning an update to BIND 9.20.\n","permalink":"/blogs/bind-auth-performance-december-2025/","tags":["BIND","performance"],"title":"Performance Improvements in BIND 9.20 in Authoritative Configurations - December 2025"},{"categories":null,"contents":"Internet Systems Consortium Statement of Privacy\nEffective date: 11 September 2025\nISC’s Commitment to Responsible \u0026amp; Ethical Privacy Practices\nWe are committed to the highest degree of respect for the privacy of our users, visitors to our websites, and attendees of our events. In this Privacy Statement, we use “we,” “us” or “our” to refer to ISC and “you” or “your” to refer to you, the user, subscriber or visitor to our web sites. 
For certain purposes of this Privacy Statement, we distinguish between individuals who choose to subscribe and participate on our user mailing lists, fora, or our open code collaboration sites (“Subscribers”) and users of our websites (the “Sites”) who simply visit the Sites (“Visitors”) and do not choose to register as Subscribers. Unless indicated otherwise, all provisions of this Privacy Statement apply to Visitors and Subscribers. This Privacy Statement applies to all of our Sites.\nBy accessing and using any of the Sites as a Visitor, you expressly and knowingly consent to the information collection and use practices as described in this Privacy Statement.\nOur commitment to your privacy is based on the following principles which we apply to our use of both your personally identifiable data (“Personal Data”) and to certain anonymous information we collect when you visit our Sites (“Technical Information,” and together with Personal Data, “Your Data”):\nWe will describe Your Data which we collect; We will inform you clearly about our collection and use of Your Data; We will not sell or rent Your Personal Data to others; We will endeavor to be completely transparent and open about our data privacy policies and practices.\nPrivacy Policy Sections\nHow Do We Collect Information?\nWhat Information Do We Collect?\nHow Do We Use Your Data Collected at Our Sites?\nCan I Choose Not to Receive Commercial Email Communications?\nCredit Card Information\nUse of Cookies; Technical Information Collection\nData Security\nCross-Border Transfers\nLinks\nAccessing and Updating Personal Data\nData Retention\nSocial Media, Blogs \u0026amp; Discussion Groups\nCompliance\nControl of Your Personal Information (\u0026ldquo;Do Not Track\u0026rdquo; Notice)\nChildren / Minors\nYour California Privacy Rights\nCalifornia Website Data Collection\nContact Us\nSubscriber Personal Data, Communication \u0026amp; Database\nChanges to This Privacy Statement\nAcknowledgements\nHow Do We Collect Information? 
We collect Your Data in the following basic ways:\nYou give it to us when you register as a Subscriber, or if a Subscriber or a Visitor registers for an event including but not limited to webinars, signing up for an ISC mailing list, or making a comment on a blog or social media;\nYou give it to us in email inquiries or in your public comments on our mailing lists or social media sites;\nWe automatically collect Technical Information when you visit our Sites;\nWe sometimes purchase commercially available lists for marketing purposes.\nWhat Information Do We Collect?\nOn the Sites, we request certain Personal Data, for purposes such as registering to become a Subscriber, renewing your membership in our lists, participating in discussion groups, responding to surveys, submitting inquiries and comments, requesting a quote for services, submitting code patches, making feature requests or logging technical issues, or registering for a webinar. This may include name, title, company/organization name, postal address, email address, work, home and mobile phone numbers, plus other relevant information you provide. See our Frequently Asked Questions (FAQ) list for specific data elements we may request and collect. We or our authorized vendors may collect Technical Information that we do not associate with any individual Site user. This information includes: how many visits we have to the Sites, when those Sites are visited, browser types used for Site visits, names of the Internet service providers, the Internet Protocol (IP) address through which you access the Internet, pages that you access while at one of the Sites, software you downloaded from any of the Sites, how you found the Site (e.g. via search, Google Ads, or referral from another web site), and the Internet address of the website from which you linked directly to one of the Sites.\nHow Do We Use Your Data Collected at Our Sites?\nWe do not sell or rent any Personal Data supplied by you. 
We work with other companies, consultants, and contractors to provide services on our behalf, such as Site and service hosting, survey hosting, audio and video conferencing, public relations, order processing and fulfillment, and mailing and sending information about, including but not limited to, our software, services, and events. We will provide those companies only the Personal Data they need to perform the service for which they are retained, and they agree to treat information confidentially and to only use it for the purposes of providing services under our agreement with them. We may use Your Data to provide you with more effective customer service and to improve the Sites and any related products or services we may provide or make available. We may use your Personal Data to provide you with updates on your list subscription or issues you have followed, submitted or commented on. If you contribute to our open source projects or significantly help us in identifying and fixing an issue, we may identify and credit you publicly, in our source code logs, release notes, social media, and blog posts. We do typically contact you and ask how you would like to be identified, but in some cases your information will appear automatically (e.g. in a commit log or issue tracker). We may use Technical Information to periodically analyze Site logs to assess aggregate usage trends in order to better serve the needs of Visitors and Subscribers and maximize the user viewing experience. Under some circumstances this information may be used for purposes of systems administration, fraud prevention, or server troubleshooting and security. This information may also be used to help improve the Sites, analyze trends, and administer the Sites. We may use data about software downloads in aggregate form to determine usage and upgrade patterns and trends. 
We may disclose Your Data if required to do so by law or in the good-faith belief that such action is necessary to (a) conform to the requirements of the law or comply with legal process served on us or the Sites; (b) protect and defend our rights or property, or (c) act in urgent circumstances to protect the personal safety of our employees and staff, agents, users of our products or services, or members of the public. To the extent we are legally permitted to do so, we will take reasonable steps to notify you in the event that we are required to provide Your Data to third parties as part of legal process. In addition, we will use Your Data to:\nProvide information or a service requested or consented to by you. Assist in the performance of our activities and public interest functions. Comply with relevant contractual obligations with you and other third parties. Improve Site performance and content, including troubleshooting and diagnostics. Improve your engagement and interaction with other Subscribers of our community, including notifying you of network compatibility, possible abuse, and service interruptions that we believe you may want to address. Improve our engagement and interaction with you. Facilitate your attendance at and participation in our events, communities, open source projects, or blogs. Confirm your identity. Process a request or payment submitted to us. Comply with legal requests. Can I Choose Not to Receive Commercial Email Communications? We use unsolicited bulk email for marketing purposes very infrequently because we are aware of the problems and annoyance caused by spam email. Every bulk email we send to you contains an easily discoverable link that will allow you to unsubscribe and stop all subsequent unsolicited commercial or marketing messages from ISC delivered via our marketing mailing list. Unless you subsequently consent (opt in) to the receipt of commercial or marketing emails, we will not use your email address for such purposes. 
Please note that the public mailing lists for open source support and discussion do occasionally include messages that may be construed as “marketing,” but those lists are not primarily maintained for marketing purposes, and you will never get email via those lists unless you explicitly subscribed. If you need help in unsubscribing from any ISC mailing list, please email our sales team at info@isc.org.\nSubscribers to our mailing lists will receive email from us with information about us, upcoming events, or issues related to the Internet, as well as posts by other Subscribers. Subscribers to updates from our various collaboration sites, including GitHub, gitlab.isc.org, and others, will also receive updates to issues they create or subscribe to.\nWe do not engage in targeted marketing activities including but not limited to device fingerprinting, profiling, and/or cross-device tracking.\nCredit Card Information\nCredit card information is not collected or stored on our servers. When you conduct transactions through a Site, payment and payment card information for transactions with us is entered directly into a third-party processor’s systems and is not transmitted through or stored by us. The card processor provides us with an authorization code which is securely stored with the payment record on our servers. In rare cases, when you explicitly request that we directly process a credit card transaction on your behalf, and you personally transmit that information to an ISC employee, we may enter it into a third-party system for you with your explicit consent.\nUse of Cookies; Technical Information Collection\nOur Sites use third parties for web analytics services to collect Technical Information. See our FAQ for a current listing of such third parties. These third parties do or may use “cookies” or similar technologies, which are text files placed on your computer, to help analyze how you use a Site. 
The information generated by the cookie about your use of a Site (including your IP address) will be transmitted to and stored by these service providers’ servers. They will use this information for the purpose of evaluating your use of a Site, compiling reports on website activity for website operators, and providing other services relating to website activity and Internet usage. They may also transfer this information to third parties where required to do so by law, or where such third parties process the information on their behalf. You may refuse and block the use of all of our (and third-party) cookies by selecting the appropriate settings on your browser.\nData Security We take reasonable and appropriate measures designed to protect the security of data transmitted to us upon receipt. ISC is a strong advocate of privacy-enhancing technologies. By default, the Sites encrypt connections between client devices and our servers to minimize the ability of any third party to “eavesdrop” on Your Data.\nOur DNS entries are signed using DNSSEC, so that if you use a DNS resolver that validates DNSSEC signatures, you can be assured that the site you are reaching is in fact published by ISC.\nOur databases and system administration logs are restricted to access by authorized and authenticated users. We use reasonable industry security standard safeguards to protect against the unauthorized disclosure of Personal Data. We take reasonable and appropriate security measures to protect against unauthorized access, disclosure, alteration, or destruction of Your Data.\nCross-Border Transfers If you visit our Sites from a country other than the United States, your communications will likely result in the transfer of Your Data across national borders. Our servers or offices may be located in countries other than the country from which you access our Sites, also resulting in the transfer of Your Data across international borders. 
If you provide Your Data when visiting one of our Sites from outside of the United States, you acknowledge and agree that this data may be transferred from your then-current location to our offices and servers and to those of our affiliates, agents, and service providers located in the United States and in other countries. The United States and such other countries may not provide the same level of data protection as the laws that apply in the jurisdiction where you live.\nLinks The Sites contain links to other sites, organizations, and resources. Please be aware that we cannot be and are not responsible for the privacy or security practices of such other sites. We encourage you, when you leave our Site(s), to read the privacy statements of those other sites that collect personally identifiable information, and to have up-to-date security and anti-virus software on all of your devices. This Privacy Statement applies only to the Sites.\nAccessing and Updating Personal Data Users may request access to their Personal Data collected and stored by us (if any), or request that it be deleted. Requests may be sent by email to info at isc.org.\nData Retention We will only retain Your Data stored on our servers in accordance with the legitimate needs of our business and as required or permitted by applicable law. We will not retain any unused Personal Data on our systems longer than necessary for legitimate business purposes.\nSocial Media, Blogs \u0026amp; Discussion Groups Please note that this Privacy Statement does not apply to any posting by you in any of our social media groups, blogs, fora, chat groups, or open source development sites, including issue trackers or similar interactive areas of our Sites. 
Your participation in those discussion areas and anything you post in those areas constitutes your public disclosure and may be attributed to you and displayed, republished, and otherwise disseminated by us in accordance with the terms of use agreement you agree to abide by in order to participate in those areas.\nCompliance Our collection and use of any of Your Data is subject to the laws and regulations of the countries and political subdivisions in which our Visitors and Subscribers reside. We are and remain committed to complying with all such legal obligations and use these legal requirements as the minimum beginning point for our use and collection of Your Data. Included in these laws and regulations are (a) the GDPR, which governs, among other things, consents, uses, and cross-border transfers of personal data concerning European Union residents, and (b) the California Online Privacy Protection Act, governing such matters with respect to California residents. If you have any questions regarding this Privacy Statement or you feel that the Sites are not following these legal requirements or our stated information policy, please contact us by email to info at isc.org or at the addresses or phone numbers listed in the Contact Us section of the Sites. You may also contact us at that address if you have any concerns about the accuracy of, or wish to correct, your Personal Data we have collected from you.\nControl of Your Personal Information (\u0026ldquo;Do Not Track\u0026rdquo; Notice) We respect enhanced user-privacy control and support the development and implementation of a standard “Do Not Track” (DNT) browser feature, designed to provide users universal and persistent control over the collection, sharing, and use of information by third parties regarding their web-browsing activities. 
Once the specification is finalized, we intend to honor users’ requests with respect to browser tracking.\nChildren / Minors The Sites are not targeted at, directed to, or intended for use by children under the age of thirteen. No person under the age of thirteen should use any Site or under any circumstances provide any Personal Data or other information at a Site. If you become aware that any individual under the age of thirteen has used any Site, please contact us immediately at info at isc.org. By using any Site, you represent and warrant that you are over the age of thirteen.\nYour California Privacy Rights California Civil Code Section 1798.83 entitles California residents to request information concerning whether a business has disclosed Personal Data to any third parties for their direct marketing purposes. As stated in this Privacy Statement, we will not sell your Personal Data to other companies and we will not share it with other companies for them to use for their own marketing purposes without your consent. 
For further information concerning your California Privacy Rights including “Do Not Track,” visit https://oag.ca.gov/sites/all/files/agweb/pdfs/cybersecurity/making_your_privacy_practices_public.pdf\nCalifornia Website Data Collection We do not knowingly allow other parties to collect personally identifiable information about your online activities over time and across third-party web sites when you use the Sites.\nContact Us Any user, including California residents, who wishes to request further information about our compliance with these requirements, or who has questions or concerns about our privacy practices and policies, may contact us at info at isc.org or by mail at:\nInternet Systems Consortium\nAttn: Chief Administrative Officer\nPO Box 360 Newmarket, NH 03857 USA\nSubscriber Personal Data, Communication \u0026amp; Database “Subscribers” are defined as individuals, companies, organizations and/or institutions that choose to register with and have an agreed relationship with us. All of Your Data as a Subscriber is covered by this Privacy Statement.\nAs a Subscriber to an ISC mailing list, you may review or update your Personal Data by logging into the Mailman portal (https://lists.isc.org/mailman/listinfo) and reviewing or changing your Personal Data directly.\nYou may also inform us by email, phone, or postal mail directed to the contact information provided on our Sites.\nChanges to This Privacy Statement We routinely update this Privacy Statement to provide additional explanation and clarification of our practices and to reflect new or different privacy practices, such as when we add new services, functionality, or features to our Sites. You can determine when this Privacy Statement was last revised by referring to the Effective Date above on this page.\nAcknowledgements This Privacy Statement was adapted from the excellent privacy statement published by the Internet Society (ISOC). 
While we take full responsibility for ISC’s Privacy Statement, we want to acknowledge that we benefited extensively from their document.\n","permalink":"/privacy/","tags":null,"title":"Statement of Privacy"},{"categories":["BIND","Open Source"],"contents":"Ondřej Surý gave this talk at RIPE 91: SLIDES, RECORDING\nAbstract At ISC, we have developed a system based on git-changelog and danger that streamlines changelog and release notes creation based on the merge request description. This in turn simplifies rebasing and merging (no more conflicts in the changelogs).\nWe also have a couple more tools that help us improve the quality of the merge requests (danger-based checks, linters, SPDX checks, etc\u0026hellip;).\n","permalink":"/blogs/2025-release-process/","tags":["open source","BIND"],"title":"Streamlining the release process from the git"},{"categories":["BIND","Open Source","DNS"],"contents":"Last week, ICANN published a report on the importance of open source in the DNS. I contributed to this report, along with people from the other open source DNS systems, DNS operators, and academic researchers with DNS expertise.\nThe Domain Name System Runs on Free and Open Source Software (FOSS) As governments become focused on the impact of cybersecurity on national security and business productivity, they naturally want to implement regulations to ensure some baseline security practices. 
In the past several years, we have seen, in the US, Executive Order 14028, NIST SSDF 1.1 - Recommendations for Mitigating the Risk of Software Vulnerabilities, the CISA Zero Trust Maturity Model, CISA\u0026rsquo;s Secure Software Development Attestation program, and Executive Order 14144, \u0026ldquo;Strengthening and Promoting Innovation in the Nation\u0026rsquo;s Cybersecurity.\u0026rdquo; In the European Union, we had the NIS2 Directive, which extended reporting requirements to DNS resolver operators, followed by the Cyber Resilience Act (CRA), which aims to secure products traded in the EU that incorporate software, and the updated Product Liability Directive, which extended product liability to software products. The trend is continuing, with the 2025 publication in the UK of a voluntary \u0026ldquo;Software Security Code of Practice\u0026rdquo; and the work to develop regulations in countries across the EU to implement the CRA.\nThese regulatory efforts span a wide range of issues in software security, including:\nMaintenance and updating across a reasonable software lifecycle\nVulnerability handling, patching, and reporting of vulnerabilities, exploits, and system breaches\nAdherence to software development best practices\nSupply chain security, ensuring the integrity of software builds\nISC already adheres to most of the best practices that these regulations seek to enforce, although in a few areas (e.g., SBOMs) standardized solutions are just beginning to emerge. However, in general, we are skeptical that regulations are the most effective way to improve open source software security, and we are concerned about the disproportionate impact the cost of compliance with regulations could have on non-profits producing open source.\nThis report addresses the (understandable) lack of background among government regulators, on both the DNS as a core Internet system and open source as a culture, development process, and business system. 
The report cites some examples of recent regulatory efforts to point out where they are consistent with supporting open source.\nThe report concludes that it is important to:\nRecognize the critical role of FOSS in the DNS\nConsult the open source community early in the process of designing new regulations or programs\nRecognize the unique sustainability challenges of open source\nBuild solutions that leverage the collaborative strength of open source\nAssign the burdens of compliance to those most able to bear them\nPlease feel free to recommend this report to anyone who could benefit from a primer on the importance of open source in the DNS.\nThe Domain Name System Runs on Free and Open Source Software (FOSS) References:\nISC and the EU Cyber Resilience Act\nA Plea for Fairness for Non-profit Developers of Open Source Software\nEO 14144\nEO 14028 (now amended by the new administration)\nCISA Secure Software Development Attestation (which brought the prospect of criminal penalties for false statements!)\nCISA Secure by Design Pledge\nCISA Zero Trust Maturity Model\nNIST Special Publication 800-218 (Secure Software Development Framework (SSDF))\nNIST Special Publication 800-53 (Security and Privacy Controls for Information Systems and Organizations)\n","permalink":"/blogs/2025-FOSS-in-DNS/","tags":["BIND","Open Source","DNS"],"title":"The Domain Name System Runs on Open Source Software"},{"categories":["ISC","Company News"],"contents":"It\u0026rsquo;s time to meet another of ISC\u0026rsquo;s engineers! This time, let\u0026rsquo;s get to know Włodek Wencel, our DHCP Quality Assurance Manager.\nWłodek has been with ISC for his entire professional career, nearly 11 years. He started as an intern after working with Tomek Mrugalski on the Forge system testing tool, as part of his master\u0026rsquo;s degree program at the Gdańsk University of Technology. 
Over the years, Włodek has progressed from intern to junior engineer to senior engineer, and in 2024 he became ISC\u0026rsquo;s DHCP QA team leader.\nInitially, working on open source was not a focus for him; he was just looking to collaborate with smart engineers that he could learn from. But now that he has seen how rewarding it is to work on an open source project, he has no plans to change anytime soon. He says, \u0026ldquo;Knowing that my work can improve the usability and quality of a product that anyone can use - and that is used all around the world - is amazing and hugely motivating to do better work each day.\u0026rdquo;\nWhen Włodek started at ISC, he had to build our QA processes and procedures from scratch. For many years he was our sole DHCP QA engineer, and he is proud that he was able to improve our coverage and develop multiple tools that have helped ISC create high-quality code. He is also very glad that he is no longer the only person on the QA team. He says, \u0026ldquo;As team leader, I am responsible for the work of the entire QA team, from planning to helping QA engineers do the best work possible. Fortunately, I still have time for engineering work, designing and writing system tests, testing environments, and release engineering.\u0026rdquo;\nHe describes our staff as \u0026ldquo;a community, even a family\u0026rdquo; and says that\u0026rsquo;s one of the best things about working at ISC. He adds, \u0026ldquo;At ISC, I feel appreciated and important.\u0026rdquo; Włodek has also taken great advantage of the fact that we all work remotely: \u0026ldquo;Throughout the years, I\u0026rsquo;ve worked in Spain, Greece, Georgia, Austria, Thailand, Vietnam, Mexico, and small islands like Madeira and Sardinia. 
I\u0026rsquo;ve traveled around Europe on my motorcycle with an \u0026lsquo;office\u0026rsquo; in my bags.\u0026rdquo;\nWhen he\u0026rsquo;s not at his desk, you\u0026rsquo;ll find Włodek on his motorcycle, at a heavy metal concert, participating in some extreme winter sports, or enjoying a nice glass of wine.\nWłodek and his work are very important to his coworkers and our customers, and we\u0026rsquo;re lucky to have him as part of our team!\nThank you for reading this profile; we hope that sharing a little information about our staff members helps strengthen our connection to our customers and users. We welcome your feedback at marketing@isc.org, and thank you for using ISC’s software!\n","permalink":"/blogs/isc-profile-wencel/","tags":["Kea","Profiles"],"title":"Meet an ISC Staff Member – Włodek Wencel!"},{"categories":null,"contents":"Is your DNS or DHCP implementation consistent with best practices? Find out with a configuration review from ISC. Have you recently inherited the system you have today?\nAre you considering transitioning to a cloud provider?\nAre you concerned that attacks on your DNS could make your sites unreachable?\nDo you want to enable DHCP failover?\nIt is not unusual to neglect core infrastructure like DNS and DHCP until something goes wrong. If you haven\u0026rsquo;t done a review recently, a confidential independent audit could uncover gaps that have gone unnoticed.\nWe offer two review options for our support subscribers: Configuration Audit The Configuration Audit is a review of selected configuration files, performed by our technical support team.\nWe start with an exchange with the administrator, to see if there are particular areas of concern, and then agree on what files or documents will be shared and reviewed. We will send you a list of questions, to uncover the basics about what you are trying to achieve. 
A support engineer will review the submitted configuration files, looking for best practices, inconsistencies, or deprecated configuration directives. (Note, they are not looking for typos!) We will then write up a report, and share it with the administrator via a support ticket. Often a video call is scheduled to discuss the findings.\nThe Configuration Audit is available to support subscribers at no charge, upon request.\nConfiguration Review The Configuration Review is a meeting, at your location or via web conference, to review your current running system and configuration and discuss options with other relevant network teams in your organization. This is a type of consulting engagement and the goals and time invested can vary depending on your needs.\nA Configuration Review can help you coordinate appropriate support across your organization, including:\nMonitoring/Logging\nTechnology Planners\nIS Strategy/Architects\nNetwork Operations\nNetwork Support/Engineering\nProvisioning/Automation\nSecurity\nProject Management\nFor more background on what might happen during a configuration review, watch the video on YouTube, BIND Best Practices from Day One.\nComparing the Configuration Audit (support service) and the Configuration Review (consulting engagement):\nFormat: Audit, document review; Review, meeting.\nLocation: Audit, remote, on our premises; Review, on-site at your premises.\nReport: Audit, written report delivered via support ticket or email; Review, written report with a video conference to review and discuss findings.\nTime commitment: Audit, 4 hours of engineer review time; Review, 8 hours on-site or via web, plus analysis and report writing, plus one readout meeting.\nFollowing either assessment (the Review or the Audit), we will provide you with a written report, to include:\nExecutive summary, including our overall assessment of the current health of the system\nObservations (positive results)\nRecommendations (opportunities for improvement)\nFurther considerations (suggestions for planning, potential longer-term actions)\nList of the records and data 
reviewed ","permalink":"/technicalservices/","tags":null,"title":"Professional Services"},{"categories":null,"contents":"Protect your network, your data, and your business. Cyber criminals target software vulnerabilities. Many of these vulnerabilities are well-known, with published announcements and patches. System administrators who do not promptly patch leave their systems wide open to abuse, DDoS, data loss, and other compromises.\nPatching your system and application software is one of the easiest things you can do to protect your network, your data, and your business.\nTIMELY patching is essential for security. Time is so essential that the US Department of Homeland Security issued a directive in April 2019 requiring mandatory patching within 15 days for critical vulnerabilities. Often, hackers need much less time than that.\nKeeping your DNS up-to-date is critical. According to Amin Vahdat, Engineering Fellow and Vice President, Systems Infrastructure at Google, in a keynote presentation at NANOG 78:\nNetwork failures are most likely to cascade, to spread, and to have widespread impact. \u0026hellip; Internet increasingly hosts services critical to our day to day lives.\nAlthough power or compute failures can be bad, it\u0026rsquo;s unlikely that one of those will take out your entire company. The same can\u0026rsquo;t be said of your network infrastructure.\nYour core network assets are the foundation on which everything else rests.\nSo why are so many organizations running unpatched DNS software in their critical infrastructure? It is tedious and repetitive work to constantly update systems. There are over 6,000 vulnerabilities announced every year. Patching everything is a constant, unending task.\nUpdating from ISC\u0026rsquo;s restricted-access repository takes less than a minute, after a one-time set-up process that takes less than 3 minutes. 
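To make the one-time set-up concrete, here is a hypothetical dnf/yum repository definition for an RHEL or CentOS system. Every value below is a placeholder: the real repository URL, access token, and signing-key location are provided with your subscription onboarding, so do not copy this file verbatim.

```ini
# Hypothetical placeholder values only -- your subscription onboarding
# provides the actual repository URL, access token, and signing key.
[isc-bind-subscriber]
name=ISC BIND 9 subscriber packages (placeholder)
baseurl=https://example.invalid/isc-bind/el$releasever/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://example.invalid/isc-bind/signing-key.asc
```

With a file like this in place under /etc/yum.repos.d/, the subsequent update really is a one-line command (for example, a dnf upgrade of the BIND packages).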
(See a video of the process)\nYou can\u0026rsquo;t do everything at once, so how should a system administrator prioritize? How do you know which of your systems are really CRITICAL?\nWhen your DNS is compromised or unavailable, many other essential systems break. It can become impossible to find your organization\u0026rsquo;s website on the Internet, costing you customers and money. You can lose email and phone service, crippling your organization. Your employees can\u0026rsquo;t find or connect with sites or applications outside your network, and you may even lose access to your internal user authentication systems.\nYou MUST make it a priority to keep your DNS available and secure.\nProblem easily solved. Patching your BIND DNS systems promptly when there is a vulnerability is actually not hard - and ISC can help.\nSubscribe to ISC\u0026rsquo;s reasonably-priced Early Vulnerability Notification service. You will receive a notification from ISC anytime there is a reported vulnerability that is about to be announced. You will have 3-5 business days of advance notice in which to update your systems, before the vulnerability is revealed to the public. You can configure your RHEL or CentOS systems to update from ISC\u0026rsquo;s private package repository of BIND 9 packages, which is always up-to-date, so updating is simply a one-line command. Get More Information!\n","permalink":"/official-packages/","tags":null,"title":"BIND 9 Packages from ISC"},{"categories":null,"contents":"One software vulnerability can enable a successful denial of service attack, disabling critical network services. Most of the vulnerabilities discovered in BIND 9 are ways to trigger INSIST or ASSERT failures, which cause BIND to exit. When an external user can reliably cause the BIND process to exit, that is a very effective denial of service (DoS) attack. 
Nanny scripts can restart BIND 9, but in some cases the server may take hours to reload, and it remains vulnerable to being shut down again.\nAny vulnerability that causes an interruption in your Kea DHCP service could cause cascading failures on your network as existing leases expire and devices fail to acquire new leases.\nWhy take unnecessary risk? Protect yourself and your organization by subscribing to ISC\u0026rsquo;s EVN service.\nThis annual subscription-based service protects your BIND, Kea, and Stork servers. Vulnerability notification is included in all of our software support subscriptions. We will also sell just the vulnerability notification, without technical support, for an annual fee. You may designate up to four individuals to be notified in the event of a security vulnerability. Your organization will receive notification before the general public, with a patch or patched version of BIND three days prior to any public announcement.\nPlease note that EVN requires execution of a Non-Disclosure Agreement, to protect both ISC and its customers.\nHow Does the EVN Work? ISC follows a careful, published process for handling all serious reported issues.\nWe are usually able to handle BIND 9 vulnerabilities with our managed disclosure process.*\nWhen a vulnerability is discovered, either through our own testing or by a private report to ISC, we first verify the problem and then we work around the clock to address it. Once we have a solution, we schedule a coordinated public announcement. 
As much as five days (and at least three business days) BEFORE the public announcement, we notify our subscribers of the problem, individually and privately, and offer them a revised version of the software that fixes the problem.\nIf you are running a current version of a major operating system, we have a restricted-access repository for our subscribers, where we update the packages for you.\n* In some cases, a vulnerability is disclosed publicly by the reporter, in which case we are not able to manage the disclosure.\nYou can protect your core network applications. Contact us to find out how.\nHow Often Are There Vulnerabilities in BIND 9? BIND 9 is not on the list of the top 50 software applications in terms of reported security vulnerabilities, but we do typically learn of 4-5 new serious vulnerabilities every year. Most of the new vulnerabilities discovered have been in the software for years, but they are exposed by new software \u0026ldquo;fuzz testing\u0026rdquo; techniques that can hammer the software with random malformed messages until one impairs the server\u0026rsquo;s function. Even if an existing vulnerability has never been used in an Internet attack, it is still important to update BIND 9 servers to prevent future abuse.\nThe website cvedetails.com displays information on past vulnerabilities by vendor and product. ISC maintains the original announcements in our Knowledgebase, along with a matrix showing which vulnerabilities apply to which BIND 9 releases.\nYour subscription helps to sustain ISC\u0026rsquo;s open source efforts! 
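The severity of vulnerabilities like these is scored with the CVSS standard. As a rough, unofficial illustration of where a typical remotely triggerable BIND assertion failure gets its rating, here is a sketch of the CVSS v3.1 base-score arithmetic (scope-unchanged case only), using the metric weights and Roundup definition from the published v3.1 specification. The vector shown (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H) is a hypothetical example, not a specific ISC advisory.

```python
import math

# Metric weights from the CVSS v3.1 specification (scope-unchanged values).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},  # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                       # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},            # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                       # User Interaction
    "CIA": {"N": 0.0, "L": 0.22, "H": 0.56},            # C/I/A impact levels
}

def roundup(x: float) -> float:
    """Round up to one decimal, per the spec's Roundup() definition."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for the scope-unchanged case only."""
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A typical remotely triggerable assertion-failure DoS:
# AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
print(base_score("N", "L", "N", "N", "N", "N", "H"))  # 7.5
```

Note how an availability-only impact combined with easy network exploitability already yields 7.5 (High), which is why a reliably triggerable crash is treated as a serious denial-of-service vulnerability.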
","permalink":"/evn/","tags":null,"title":"Early Vulnerability Notification (EVN)"},{"categories":null,"contents":"If you suspect you have found a vulnerability in BIND 9, Kea DHCP, Stork, or ISC DHCP, or if you wish to inquire about a vulnerability that you have learned about which has not yet been publicly announced, ISC encourages you to take one or more of the following actions, as appropriate:\nOpen a confidential GitLab issue (preferred) or send email to bind-security@isc.org - for BIND 9-related vulnerabilities\nSend email to kea-security@isc.org - for Kea DHCP-related vulnerabilities\nSend email to stork-security@isc.org - for Stork-related vulnerabilities\nEmails to any of the above addresses automatically create secure, confidential issues in ISC\u0026rsquo;s GitLab instance.\nsecurity-officer@isc.org - for any other security issues*\n* If possible, we ask that you please encrypt your communications to the security-officer@isc.org address using the ISC Security Officer public key found on our PGP Key page. Our OpenPGP keys are also available from our FTP site.\nMore information is available about How to Submit a Bug Report.\nLearn more about ISC\u0026rsquo;s Software Defect and Vulnerability Disclosure Policy.\nReporting a Bug That Is NOT a Software Vulnerability Please report bugs in BIND 9 by opening an issue in our BIND GitLab.\nPlease report bugs in Kea at our Kea GitLab.\nPlease report bugs in Stork in our Stork GitLab.\nPlease report ISC DHCP bugs at our ISC DHCP GitLab.\nEnsuring You Are Not Running Software With a Known Vulnerability For a listing of vulnerabilities in BIND 9, please see the BIND 9 Vulnerability Matrix in ISC\u0026rsquo;s Knowledgebase. 
Kea and ISC DHCP CVEs are also available in our Knowledgebase.\nTo ensure that you are notified of any new discovered vulnerabilities, you should become an ISC support subscriber, which entitles you to early notification of vulnerabilities via a secure, private support queue.\nYou can also follow ISC vulnerability notices by subscribing to one of our community mailing lists. Please subscribe to the BIND-announce, Kea-announce, and/or DHCP-announce list(s), as appropriate.\nISC uses the CVSS calculator, a program of first.org and NIST, to determine the severity of potential security issues. We invite users to read more about our CVSS Scoring Guidelines in our Knowledgebase.\n","permalink":"/security-report/","tags":null,"title":"Reporting Software Vulnerabilities"},{"categories":null,"contents":"Are you thinking of using ISC\u0026rsquo;s software? Sure, we think it\u0026rsquo;s great, but don\u0026rsquo;t just take our word for it; see why our existing customers and supporters use and support ISC\u0026rsquo;s software!\nThe University of New Hampshire Interoperability Lab uses Kea as part of their certification testing for DHCP. Read an interview with Tim Winters, IPv6 logo Technical Lead for the lab about how and why he uses Kea.\nInternetNZ operates the .nz registry for New Zealand. They offer internationalized domain names in the Māori language, both under a separate third-level domain and in .nz. InternetNZ engineers are frequent participants in DNS-OARC conferences. Sebastian Castro of InternetNZ personally contributed to the development of BIND\u0026rsquo;s dnssec key-manager utility.\nInternetNZ subscribes to ISC\u0026rsquo;s Early Vulnerability Notifications for BIND.\nSWITCH operates the Swiss country-code TLDs, .ch and .li. 
SWITCH subscribes to ISC\u0026rsquo;s Advance Security Notification service, so they can receive BIND patches for critical security issues a few days before they are announced.\nSWITCH uses BIND resolvers and dnsdist to provide recursive DNS services to users of the Swiss higher education and research network NREN. SWITCH also offers a DNS Firewall service which supports BIND with RPZ. Oli Schacher of SWITCH discovered and reported a serious bug in RPZ, which ISC fixed in CVE-2017-3140.\nSWITCH not only supports ISC with a regular EVN contract; they also made an unrestricted donation of CHF 10,000 to ISC in 2017.\nCIRA is the Canadian Internet Registration Authority. CIRA also now provides a DNS Anycast service and a DNS security service for enterprises. CIRA has supported ISC for many years; they were one of the Patron supporters of ISC\u0026rsquo;s BIND 10 project, which lives on in the Kea DHCP server, and they have made many advances in the DNS community, particularly in operationalizing DNS security (DNSSEC). CIRA contributes actively to the DNS technical community through DNS-OARC, as you can see from this presentation from Evan Thompson on the Provisioning Performance of Name Server Software.\nThrough their support relationship with ISC, CIRA is helping to underwrite the maintenance of ISC\u0026rsquo;s open source for everyone.\nSIDN operates the .nl country registry for the Netherlands, the first active country top-level-domain in the DNS, outside of the US. SIDN Labs authors tools and conducts research projects to help support the Internet and DNS in particular.\nAs an ISC support customer, SIDN is also helping to support the open source you use!\n","permalink":"/testimonials/","tags":null,"title":"What Do Our Users Say?"},{"categories":null,"contents":"Thank you for your interest! ISC has a long history of finding the best people to focus their talents and passions on the mission of supporting the infrastructure of the Internet. 
If you think of yourself as an expert with nothing else to learn, then ISC is not the place for you. ISC is a continuous discovery experience as we find new ways to meet the demands of an essential and expanding Internet.\nISC’s commitment to open source as a critical tool for the Internet’s success means that your work will see the light of day and it will change the world.\nAt ISC, you will get a chance to work with people that are here for the same challenge and the same relevance as you; we are passionate advocates for the open Internet.\nOpenings There are no job openings at this time.\nPlease check back regularly to see if new openings have been posted. The current list of open jobs at ISC is always available here.\nISC is an equal-opportunity institution and adheres to a non-discrimination policy. This policy applies to all ISC staff members, board members, contractors, clients, and constituents.\nISC follows an equal-opportunity employment policy and employs personnel without regard to race, creed, color, ethnicity, national origin, religion, sex, sexual orientation, gender expression, age, physical or mental ability, veteran status, military obligations, or marital status.\nIn addition, this policy applies to internal promotions, training, opportunities for advancement, terminations, outside vendors, members or customers, service clients, use of contractors and consultants, and dealings with the general public.\n","permalink":"/careers/","tags":null,"title":"Careers at ISC"},{"categories":["Kea","Open Source"],"contents":"ISC is excited to announce the release of Kea 3.0.0! This is a major release, and is the first Long-Term Support (LTS) version of Kea.\nThe software and release notes can be downloaded from our website at https://www.isc.org/download/#Kea.\nMore Open, with Longer-term Support With this release, we are ending maintenance of Kea 2.4 and opensourcing TWELVE Kea hooks that were previously under a commercial license. 
Now, Kea open source users should be able to use all the features of Stork, including managing host reservations and subnet configuration.\nWe have heard that you would like the option to upgrade Kea less frequently. We have been releasing a new stable version every year and supporting it for two years, but Kea 3.0 will be our first release with a three-year life span. We will continue supporting two stable versions at a time, including the 3.0 LTS version and one other. We also plan to have more frequent maintenance releases, at least one every six months for our stable versions.\nKea 3.0 includes a number of major changes which are not backwards-compatible. This release brings so many changes that we decided to skip the Kea 2.8 version number and release Kea 3.0 as the next stable branch after Kea 2.6.\nUsers are strongly encouraged to read the Kea 3.0 release notes carefully before upgrading. Some of these changes will require configuration updates. Our helpful support team has compiled a list of things to be aware of when upgrading to Kea 3.0.\nHere are a handful of the most significant changes:\nThe installation process has changed because we have changed the packaging for Kea hook libraries. Most Kea hook libraries have become open source and are freely available; the only exceptions are the Role-Based Access Control hook (RBAC) and the Configuration Backend hook (CB), which remain commercially licensed. The open source hook libraries will be available in the Kea source tarball and for package installation from the official ISC repositories on Cloudsmith.io. The isc-kea-hooks packages in the open-source repository will contain many more hooks than in previous versions. Most users will no longer need to bother with access tokens to install Kea. 
For details on package installation, see: https://kb.isc.org/docs/isc-kea-packages.\nHopefully, all our users have already noted the security enhancements made to Kea 2.4.2 and 2.6.3, as outlined in the following Common Vulnerabilities and Exposures: CVE-2025-32801, CVE-2025-32802, and CVE-2025-32803. In addition to these, other significant security changes were made in Kea 3.0. Administrators will need to change some file locations, set new passwords, and better secure remote management interfaces. A command-line switch can disable some of these security checks, but these decisions will require some careful thought. We have updated the security section in the Kea ARM to provide more guidance about securing your system.\nWe are planning to remove the Kea Control Agent (CA) in a future release. As of Kea 3.0, the CA is no longer needed to provide remote access. The DHCPv4, DHCPv6, and DHCP-DDNS daemons now have native support for API sockets over HTTP and TLS (HTTPS), without the need to use the Control Agent. This will give us a cleaner, simpler architecture, and reduce the opportunities for misconfiguration. Role-based Access Control (RBAC) hook users will have to update their configuration so that the RBAC hook is loaded by the Kea daemon, rather than the CA. This is a long-term refactoring project: the CA will still be supported in Kea 3.0, but will be removed in the next development branch.\nWe have made several changes in client classification to make it more flexible and to ease the migration path from ISC DHCP. It is now possible to add options based on the client or the subnet or both, and the option inheritance has been adjusted to match that in ISC DHCP.\nModernizing our build system, moving from Autotools to Meson, was another major change, which will speed up the build process and be easier to maintain in the long run. 
Users who build their own images may find these notes to be helpful: https://gitlab.isc.org/isc-projects/kea/-/wikis/Processes/How-To-Meson.\nThe libraries that enable the use of MySQL or PostgreSQL database backends have been restructured so that these are separate libraries that users can choose to include or omit from their deployments. Users who are using database backends will now have to install these hooks, which were previously included with all installations.\nNew or recently updated KB articles Things to be aware of when upgrading to Kea 3.0 Fetching Kea Sources Migrating Kea to a New MySQL Database Server Standard DHCP Options Defined in ISC DHCP and Kea Kea HA Quickstart Guide Limiting DHCP Decline Facilitating Classification with Template Classes Other Helpful References Kea Administrator Reference Manual (ARM) - if you access this on the ReadTheDocs site, you may access any of the many available versions of the ARM. ISC Software Support Policy and Version Numbering - this has been updated to show the new longer lifecycle for Kea 3.0 Notes on Building Kea with Meson ","permalink":"/blogs/kea-3-0/","tags":["Kea","DHCP"],"title":"Kea 3.0, our first LTS version"},{"categories":["ISC","Company News"],"contents":"All of our staff profiles so far have been of our engineers, who are of course very important and make up the majority of ISC. But this time, we thought we\u0026rsquo;d focus on someone who regularly interacts with many of our customers and users: Vicky Risk, ISC\u0026rsquo;s Product Manager and Director of Marketing.\nVicky began her college years studying architecture, then geology, and then philosophy. But she ended up in the network computing world after getting a job at Massachusetts Institute of Technology (MIT), where, she says, \u0026ldquo;my first assignment involved sorting about 9,000 index cards three different ways, and typing them up into three lists. 
That was when I decided that was a job for a computer, and I set about computerizing that whole process.\u0026rdquo;\nShe adds, \u0026ldquo;I ended up with a motley assortment of early PCs with different operating systems and no pre-installed software. I got my boss to send me to database programming summer camp, and that was the end of typing lists. And while I was at MIT, I got an MBA from Simmons College.\u0026rdquo;\nWhat followed has been a satisfying product management career at a series of networking companies, including Cayman Systems (anyone remember the GatorBox?), Banyan Networks, Ascend Communications, Combinet Systems, and 15 years at Cisco Systems. After a layoff at Cisco and hanging out at home for a bit, doing some traveling and volunteer work, ISC president Jeff Osborn (whom she met while working at Cayman) asked Vicky if she was interested in a new job. She needed the health insurance, so naturally she said yes.\nWhen asked what her favorite thing is about working for ISC, Vicky responds \u0026ldquo;Health insurance? No, don’t say that. Maybe the general air of moral superiority? No, don’t say that either. It\u0026rsquo;s really that my colleagues are so smart and interesting and committed to their work. I also kind of like going to conferences and hearing about operator issues in an environment where nobody expects me to solve the problem!\u0026rdquo;\nIn her roles as both the Product Manager and Director of Marketing, Vicky says, \u0026ldquo;My job is to make our users aware of the contributions we are making at ISC, so that hopefully they will appreciate our work. I also want our staff to be aware of, and appreciate, the concerns of our users. One of my top priorities is to ensure that ISC remains trustable, which means striving for transparency and honesty as much as possible, particularly when we make a mistake. 
While we have applied for the occasional grant, I am very proud of the fact that ISC is funded almost entirely by our users, which means we are sincerely focused on their success.\u0026rdquo;\nVicky highlights another important aspect of her work: \u0026ldquo;Over the past year or two I have become more involved with external teams working to understand and influence the emerging wave of government regulations aimed at improving software security. It is hard to keep up with these efforts, but I have been able to make some contributions. I have enjoyed working with people from other open source organizations, but I am not convinced that more regulation will actually improve our collective cybersecurity posture.\u0026rdquo;\nVicky has an excellent sense of humor and is very self-deprecating. She says \u0026ldquo;I am known for making a lot of typos, but the good news is that I\u0026rsquo;m not involved in writing the software!\u0026rdquo; But she also has many strengths, including \u0026ldquo;a willingness to try things I don’t know how to do. One of the best things about working in a small company is the variety of things you get to do, which is much more interesting for me than a more specialized job.\u0026rdquo;\nAs always, we hope you\u0026rsquo;ve enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers and users. We welcome your feedback at marketing@isc.org, and thank you for using ISC’s software!\n","permalink":"/blogs/isc-profile-risk/","tags":["Profiles"],"title":"Meet an ISC Staff Member – Vicky Risk!"},{"categories":["ISC","Open Source","Company News"],"contents":"\nFred Baker, ISC Board Member from 1994 - 2025\nYesterday ISC, and the Internet as a whole, lost a generous long-time contributor.\nFrederick J. 
Baker (February 28, 1952 - June 18, 2025) was one of the original members of the ISC Board of Directors, appointed at ISC\u0026rsquo;s incorporation in 1994.\nFred had a long career in the communications industry, working for Control Data Corporation, Vitalink Communications, Advanced Computer Communications, and for 22 years, at Cisco Systems.\nAfter retiring from Cisco, Fred worked as a contractor, notably for the Internet Society and ISC. In addition to serving on the ISC BOD, in 2017 he joined the Root Server System Advisory Committee of ICANN, representing ISC. He served as Co-Chair of RSSAC from October 2018 to December 2019, and as Chair from January 2020 through December 2022.\nFred volunteered a lot of his time to working with the Internet Engineering Task Force (IETF), the body that develops standards for the Internet. He chaired a number of IETF working groups, including several that specified the management information bases (MIB) used to manage network bridges and popular telecommunications links, and the IPv6 Operations working group. He served as IETF chair from 1996 to 2001, and he served on the Internet Architecture Board from 1996 through 2002. Fred co-authored or edited at least 60 Request for Comments (RFC) documents on Internet protocols, and contributed to others. The subjects covered include network management, Open Shortest Path First (OSPF) and Routing Information Protocol (RIPv2) routing, quality of service (using both the Integrated Services and Differentiated Services models), Lawful Interception, precedence-based services on the Internet, and others.\nIn addition, he served as a member of the Board of Trustees of the Internet Society from 2002 through 2008, and as its chair from 2002 through 2006. He was a member of the Technical Advisory Council of the US Federal Communications Commission in 2004. He worked as a liaison to other standards organizations such as the ITU-T. 
In 2009-2010, he served as chair of the RFC Series Oversight Committee.\nHe represented the IETF on the National Institute of Standards and Technology Smart Grid Interoperability Panel and Architecture Committee from 2008-2013, and was Cisco’s representative to a Broadband Internet Technical Advisory Group. He also held several patents.\nFred was committed to the collaborative, consensus-driven process of creating open standards for the Internet and he demonstrated his commitment throughout his long career with years of active volunteering. Besides his leadership roles, he also welcomed and mentored new participants in the IETF.\nFred was a wonderful guy, an Internet luminary, and a great friend to ISC over the course of decades of board membership, as well as representing ISC at RSSAC as chair and in many other roles in the IETF, ICANN, and ISOC worlds. Fred will be dearly missed.\nWe extend our deepest condolences to Fred\u0026rsquo;s family.\n","permalink":"/blogs/fred-baker-memoriam/","tags":["Open Source","Root Server System"],"title":"ISC recognizes Fred Baker's years of service to the Internet"},{"categories":["Stork","BIND"],"contents":"Stork 2.2 adds DNS support Since the last stable version of Stork, we have begun adding support for monitoring DNS alongside DHCP. You will now see any discovered BIND servers in the Stork machines view.\nAfter authorizing some BIND servers to share information with Stork, you will see a DNS dashboard appear (below the DHCP dashboard). Both DNS and DHCP dashboards are collapsible. The dashboard pictured illustrates Stork\u0026rsquo;s \u0026ldquo;dark mode\u0026rdquo; setting.\nStork 2.2 DNS zone viewer Stork is now able to retrieve the list of DNS zones from all detected BIND 9 servers.\nThis includes details about the zone (zone type, serial number, the associated server or servers). Note that Stork is aware of views and associates the zones with the appropriate view. 
The resource records (RRs) of each non-built-in zone can be retrieved using AXFR and displayed. It is easy to filter out built-in zones, which makes it more obvious which are authoritative servers and which are resolvers. The zone list compares serial numbers between servers and highlights any discrepancies, as shown in the screenshot below. We are working towards a solution for monitoring and troubleshooting zone transfers. This is an example of an operational detail that is difficult to observe from looking at a single DNS server, and is one way we hope Stork can facilitate DNS management. Zone details The zone details show the records included in the zone.\nHost reservation migration to a database Kea allows storage of host reservations in a configuration file or in a database; the former is usually preferred in smaller deployments.\nAs the network grows, administrators may want to adopt a database solution. Starting with this release, Stork is now able to migrate hosts from a file to a database backend. This is the first time data migration from the Kea configuration to a database backend is supported by Stork. In the future, we may implement a similar feature for subnets, for example. We would appreciate any feedback regarding the migration feature\u0026rsquo;s usability and experience in real deployments.\nRead-only administrator group When you use LDAP groups to authorize users, you can assign privileges based on LDAP groups.\nA new read-only group was added to the previously available admin and super-admin groups. 
This is a step towards more granular access permissions, which will enable teams to use Stork for help desk functions by less-privileged users.\nNew Kea classification features Stork is now able to handle the new client classification parameters (evaluate-additional-classes, client-classes) added in the Kea 2.7.4 release.\nPrometheus and Grafana The statistics exporter now adds the shared network name as a label in the metrics exported to Prometheus, which can be used to generate more descriptive charts in Grafana. Pool statistics are now exported to Prometheus.\nNew Stork demonstration system Still not sure?\nRecently, we have launched an online Stork demo system and we invite you to try it out at https://demo.stork.isc.org/login. This demo version resets every hour. If you would prefer a more extended trial, you can install our packaged demo version, which includes a DHCP traffic generator, a handful of virtual Kea servers, and of course the Stork agents and server. One of our QA engineers has recorded a short video that walks you through the entire installation process for the demo.\nStork 2.0 is now approaching EOL; please upgrade Stork has stable (even-numbered minor version, e.g. 2.0.1) and development (odd-numbered minor version, e.g. 2.1.2) releases; development releases are issued bi-monthly with some exceptions. New stable releases, such as Stork 2.2.0, are expected roughly every six months. When we release a new stable version, we plan to end support for the prior stable version after a 3-month transition period. For ISC\u0026rsquo;s detailed software support policy, see https://kb.isc.org/docs/aa-00896#stork.\nWe recommend using stable versions in production if possible. We encourage users to test development releases and report back their findings on the stork-users mailing list (available at https://lists.isc.org/mailman/listinfo/stork-users) or open GitLab issues. 
ISC professional support customers may also report issues via our support portal.\nDownload and installation To download Stork sources, go to ISC\u0026rsquo;s Downloads page. Release notes are posted beside the tarball download link. Most users prefer to install our packages for Debian, Alpine, or RPM-based systems from ISC\u0026rsquo;s Cloudsmith repo.\nThe Stork Quickstart Guide will get you up and running quickly. The Stork Administrator Reference Manual (ARM) may also be helpful.\nStork professional technical support available ISC is now offering professional technical support for Stork, as a line item on a Kea DHCP support contract. Support for Kea is available as an annual subscription, with several levels:\nBronze - Subscriber hooks (Role-Based Access Control and Configuration Backend); Advance Security Notifications (ASNs); email support during business hours Silver - Subscriber hooks (RBAC and Configuration Backend); ASNs; 24x7 email and phone support with SLAs Gold - Subscriber hooks (RBAC and Configuration Backend); ASNs; 24x7 email and phone support with faster response times The annual cost of our Kea support subscriptions is based on deployment size, as measured by the number of simultaneous leases provided. For more information on the support options, please see our Support page and our Kea Support Subscription datasheet. Please feel free to contact us for more information.\nReferences Stork online demo system Stork-users mailing list Stork Administrator Reference Manual (ARM) Stork Release Roadmap ","permalink":"/blogs/stork-2-2/","tags":["Stork","DNS"],"title":"Stork 2.2 adds DNS zone viewer"},{"categories":null,"contents":"Services provided by ISC are described expressly in a written Quote, which is subject to ISC terms and conditions of business. Any variation from those terms and conditions must be evidenced by a written agreement, signed by the Parties, expressly adopting different terms and conditions of business. 
Any Purchase Orders shall be for the sole purpose of defining quantities, prices and identifying the services to be provided. All other terms and conditions contained or referenced in any Purchase Orders are expressly rejected.\n","permalink":"/terms-and-conditions/","tags":null,"title":"ISC Terms and Conditions"},{"categories":["Stork","Open Source","Security"],"contents":"A serious HTTPS vulnerability prompted us to check on the security of communications between Stork agents and the Stork server. We found a significant vulnerability, which we fixed in Stork version 1.15.1 and disclosed with CVE-2024-28872.\nThe changes we made in Stork 1.15.1 limit access to gRPC communications to applications that have been previously authorized.\nOVERVIEW: An attacker can pretend to be a new Stork agent, send a Certificate Signing Request to the Stork server, and then use the certificate generated by the server to attack legitimate agents by using the general HTTP/2 vulnerability covered by VU-421644. Furthermore, the attacker can send any commands to Kea, via the agent, including disruptive changes (e.g. config-write, config-set, shutdown).\nDETAILS: Stork uses gRPC for communication between the Stork server and the Stork agents. This connection is secured with TLS. The communication between the server and the agents is unidirectional. The server initiates the connections to the agents. The agents do not initiate connections to the server over gRPC. The Stork server should be the only application able to connect to the agents. The agents must refuse connections via gRPC from any source other than the Stork server.\nThe Stork server generates two sets of certificates: the Certificate Authority (CA) certificate and the server certificate. The former is used to sign the certificate signing requests (CSRs) from the registering agents. The latter is used to communicate between the server and the agents. 
The server certificate is signed with the CA certificate.\nIn Stork 1.15.1 we fixed a security vulnerability whereby an attacker could generate and send a CSR to the Stork server to obtain a certificate signed with the Stork server\u0026rsquo;s CA certificate, and subsequently use this certificate to communicate with other registered agents over gRPC. The attacked agents didn\u0026rsquo;t properly recognize that the communication was established by an attacker rather than the Stork server.\nThe code fix involves several code changes. The most significant change is that when a registering agent sends the CSR to the server to sign its certificate, the server, besides the signed agent certificates, returns a fingerprint computed from the server\u0026rsquo;s certificate. The agent stores this fingerprint and compares it with the fingerprints computed from the certificates of the parties trying to connect to it over gRPC. If the fingerprints don\u0026rsquo;t match, the agent rejects the connection. It means that only the Stork server can effectively connect to the agents. Any other party trying to connect to the agent would present a different certificate with a different fingerprint.\nThe fixed version also puts some additional requirements on the certificates used in the gRPC communication. Specifically, the server sets the Extended Key Usage extensions to serverAuth and clientAuth in the generated server certificate. The presence of these extensions in the certificate presented by the Stork server is verified by the Stork agents. The agent certificates signed by the server include the serverAuth extension, whose presence is verified by the Stork server. Finally, the CA certificate must include the following Key Usage extensions: keyCertSign, digitalSignature, and the following Extended Key Usage extensions: clientAuth and serverAuth. Due to these new requirements, the existing server certificates are regenerated after the upgrade to the new Stork release. 
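To illustrate the fingerprint check described above, here is a minimal sketch. The function names and the choice of a SHA-256 digest over the DER-encoded certificate are assumptions made for illustration; they do not reflect Stork's actual internals.

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """Fingerprint of a DER-encoded certificate (SHA-256 assumed here)."""
    return hashlib.sha256(der_bytes).hexdigest()

def agent_accepts(stored_fingerprint: str, presented_der: bytes) -> bool:
    """An agent accepts a gRPC peer only if the certificate it presents
    matches the fingerprint the agent stored at registration time."""
    return cert_fingerprint(presented_der) == stored_fingerprint

# At registration, the agent stores the server certificate's fingerprint
# (placeholder bytes stand in for real DER data).
server_der = b"...DER bytes of the Stork server certificate..."
stored = cert_fingerprint(server_der)

# A connection presenting the same certificate is accepted...
assert agent_accepts(stored, server_der)
# ...while any other certificate is rejected, even one an attacker
# obtained by having its own CSR signed by the server's CA.
attacker_der = b"...DER bytes of an attacker-obtained certificate..."
assert not agent_accepts(stored, attacker_der)
```

The essential property is that the agent pins exactly one certificate: a peer whose certificate hashes differently is refused regardless of whether that certificate chains to the same CA.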
The agents must be re-registered to obtain the certificates signed using the regenerated server certificate and to obtain the server certificate fingerprint. Any externally generated server certificates that don\u0026rsquo;t meet the aforementioned requirements must be manually regenerated and imported to Stork using the stork-tool program.\n","permalink":"/blogs/stork-and-https/","tags":["Stork","Security"],"title":"Stork 1.15.1 and https vulnerabilities"},{"categories":["Kea","Open Source"],"contents":"Hurrah! Twelve new open source hooks in Kea 2.7.7! With the release of Kea 2.7.7, ISC has put twelve more Kea hooks under the MPL 2.0 open source license. These twelve hook libraries were previously commercially licensed, so this is a huge addition to the open source.\nThis is a significant step for the Kea team, demonstrating our confidence that Kea has achieved a level of financial sustainability where we can forego the revenue we might have earned from charging for these power-user features. What Kea hooks are now available? These hook libraries, previously licensed under ISC\u0026rsquo;s HBCEULA, have now been relicensed under the MPL 2.0 open source license with the release of the Kea 2.7.7 development version. They include all of the hooks in the Premium package, and most of the hooks in the Subscription package. They are:\nClass Cmds - allows configured DHCP client classes to be added, updated, deleted, and fetched without needing to restart the DHCP server.\nDDNS Tuning - adds custom behaviors related to Dynamic DNS updates on a per-client basis. Its primary feature is to allow the host name used for DNS to be calculated using an expression.\nFlex ID - allows the administrator to specify which field to use as the unique identifier for clients.\nForensic Logging - creates a configurable lease history.\nGSS-TSIG - authenticates DDNS updates via GSS-TSIG, using Kerberos mechanisms to retrieve dynamic keys. This capability is of particular importance to Windows networks, as this is usually the only method available in an Active Directory environment.\nHost Cache - provides a way to cache responses from other hosts, improving performance when using the RADIUS hook.\nHost Cmds - enables you to store host reservations in a separate backend MySQL or PostgreSQL database, and add or remove host reservations on demand, without restarting Kea. Stork relies on this hook to provide a graphical interface for managing host reservations in Kea.\nLease Query - adds support for DHCPv4 Leasequery (RFC 4388), DHCPv4 Bulk Leasequery (RFC 6926), DHCPv6 Leasequery (RFC 5007), and DHCPv6 Bulk Leasequery (RFC 5460).\nLimits - applies a limit to the rate at which packets receive a response. The limit can be applied per-client class or per-subnet.\nPing-check - the kea-dhcp4 server can perform ping checks of candidate lease addresses before offering them to clients.\nRADIUS - provides access control or accounting integration with a RADIUS server.\nSubnet Cmds - allows listing, adding, and changing subnet configuration in Kea without a restart. Stork relies on this hook to manage subnets in Kea.\nFor now, the additional hook libraries are relicensed for the Kea 2.7 development branch only. Kea 3.0 will be the first stable version that includes all of these hooks in the open source. ISC recommends that most users rely on stable versions in production. The earlier stable versions already released will remain licensed as they were when they were released, but many Kea users may be interested in trying out some of the hooks they didn\u0026rsquo;t previously have access to in a test bed running this development release, just to see the functionality they enable in both Kea and Stork. Two hook libraries remain under the commercial license: Configuration Backend and Role-based Access Control.\nWhy were these libraries commercially licensed in the first place? 
We wanted to ensure a stable funding stream for the Kea team. We were cautious, because we had a lot of evidence that users didn't want or expect to contribute to development or maintenance of DHCP software, based on our experience with ISC DHCP. We had already observed that many of our users were empowered to purchase software, but were *not* able to make donations. Users who wanted to help sustain the open source they used often did not have any ability to make a donation, but they could buy something from us, if we offered to sell something. So, we decided to offer some optional features for sale and as an incentive for signing up for our technical support services.\nWe also thought that charging a small fee for use of some of these optional features was a fair way to spread the burden of sustaining the software across a large user base. We created an online store, where we offered an option to purchase and download a package of the most popular Kea hooks for $549. We reasoned this was a fair amount for a business to pay for network infrastructure software. How did online Kea hook sales work out? For the benefit of others who may be considering this mechanism, we can say that it was not a massive success. A lot of companies do not buy software the way you might purchase a software game. They want a quote, an invoice, competitive bids from multiple suppliers, a vendor portal set up, etc., etc. Our little online store turned out to be anything but hands-off, worry-free, or self-service. We did get some customers, including many who purchased multiple licenses, and came back to renew, and some of these eventually signed up as technical support customers. We are grateful to every one of these organizations. However, others struggled with both the online purchasing store and with the whole concept of software that was open source at the core with proprietary add-ons. 
We received inquiries from people who wanted us to distribute via resellers, or who had lots of administrative overhead in order to make even a small purchase. Users also needed a lot of help to determine in advance which features they would require. Those who did opt to purchase the add-ons, and those who subscribed for technical support, then had to jump through extra hoops to install the software, using special access tokens we had to create and maintain. This was extra work for both us and the users, and it didn't feel like a productive use of our energy. Why change the licensing now? ISC is a mission-driven non-profit and we don't need, or want, to take in any more money than we actually need to sustain the project. We now have enough users who have decided to pay us for technical support that this revenue is enough to cover our expenses. We don't need to sell access to software add-ons, and we would certainly rather not do that if we don't have to. It was always uncomfortable for ISC to have commercially licensed software, and was definitely difficult for users to navigate. We still think it would be fairer if all users contributed something, but logistically that turns out to be difficult. We planned the change to coincide with the upcoming release of Kea 3.0, because that is a major new stable version with other significant changes.\nWhat does this mean for Kea users? New Kea users can adopt Kea without worrying about whether they may discover they need some functionality in closed source. Current Kea users can now take advantage of more open source hooks, and nearly all users will be able to install Kea from our open source repositories, without having to include an access token in their download scripts. All of the hooks that Stork uses to configure subnets and host reservations in Kea servers are now open source.\nThere are only two hooks we have not yet put into the open source: the Configuration Backend and Role-based Access Control. 
These two hooks will be available to support subscribers only. These features have more limited applicability at the moment, and currently, Stork doesn't integrate with either of them (support for the Configuration Backend hook is on the Stork roadmap).\nMost current Kea users who install development branch packages from the access-controlled repository, https://cloudsmith.io/~isc/repos/kea-dev-prv, will want to switch to the open source repository, https://cloudsmith.io/~isc/repos/kea-dev. The formerly proprietary code has all been moved into the open source package repository and the only code remaining in the `-prv` repositories will be the two remaining proprietary hooks. Users who are entitled to the remaining proprietary hooks will need to install the `kea-dev` repository when installing Kea 2.7.7, because the install script for the proprietary software has to first retrieve and install the open source. The same pattern will apply when Kea 3.0 is released, and a new `kea-3-0` stable version package repo is created. This change is coming with the Kea 2.7.7 development version (March 2025), and the upcoming Kea 3.0 stable version (planned for release in June 2025). There will be no changes to the existing Kea 2.6.* and earlier versions; we don\u0026rsquo;t want to make such a major change in our existing stable versions, so already-released stable versions will remain under the license they were published with. Thanks to our users, Kea is now sustainable as open source! We are very happy that all the benefits of Kea and Stork are available to all users now.\n","permalink":"/blogs/kea-hooks-opensourced/","tags":["Kea"],"title":"Most Kea Hooks Open-sourced"},{"categories":["BIND","Open Source"],"contents":"In the computer world, we commonly store temporary data in a cache to make programs run smoothly. A web browser has a local cache for storing web pages you've visited, and it also stores cookies in the local cache. 
CDNs store web pages in geographically diverse locations around the world for faster access. Recursive DNS servers are essentially just large caches wrapped in the DNS protocol. However, we never have unlimited memory available for caching, whether it's a purely memory-based cache or temporary data stored on a hard drive. Limited memory leads to a crucial decision: what to keep and what to discard. And that's where cache-eviction algorithms come into play. Their task is to preserve data that is most likely to be reused and discard data that won't be needed in the near future.\nCache-Eviction Algorithms It's important to mention here that a key characteristic of cache-eviction algorithms is their simplicity. Complex algorithms may have better performance, but their diagnostics are much more difficult in the event of strange behavior. And since these algorithms are usually located in the so-called hot-path (i.e., a performance-critical area), more complex algorithms put more load on the CPU. It may happen that we achieve a better cache-hit/cache-miss ratio, but the overall performance will be lower.\nI've included a basic overview here of only the most common cache-eviction algorithms. For a more detailed overview, you can refer to the Wikipedia article Cache replacement policies.\nBélády\u0026rsquo;s Algorithm Bélády's algorithm is a purely theoretical concept that we can also call the clairvoyant algorithm. It's an algorithm that evicts items from the cache that will be needed the furthest in the future. However, since we don't have a crystal ball, this concept is only used when comparing the efficiency of algorithms that can actually be implemented.\nRandom Choice This algorithm randomly selects items from the cache to be evicted. 
At first glance, this strategy doesn't seem like a good one, but it turns out that for some workloads, it may not be a bad choice.\nFirst In First Out (FIFO) and Last In First Out (LIFO) These two simple algorithms based on a linked list indiscriminately evict items purely based on when they were added to the list. FIFO evicts items that entered the list first, so it works like a queue. LIFO, on the other hand, evicts items that entered the queue last, so it behaves like a stack.\nLeast Recently Used (LRU) As the name suggests, this cache-eviction policy evicts items from the cache that have been used the least recently. A simple LRU algorithm is implemented by adding new items (cache-miss) to the beginning of the list as long as there is space in memory, and existing items (cache-hit) are moved to the beginning of the list when accessed. When there is no more space to store new items, items are removed from the end of the list. LRU is actually a whole family of algorithms that always improve the original simple LRU algorithm in some way.\nLeast Frequently Used (LFU) The family of LFU algorithms is based on counting the frequency of use of items in the cache. After the memory is full, the algorithm evicts items from the cache that have been used the least frequently. This algorithm is more suitable for some types of workloads, but it is more complex to implement.\nSIEVE Design SIEVE is a new cache-eviction algorithm that excels in its efficiency and simplicity. The implementation of SIEVE needs one queue (e.g., a linked list) and one pointer, which we call the hand. The queue maintains the order of insertion of items, and each item in the queue must contain one bit to indicate whether it has been visited or not. The hand points to the next candidate for removal from the queue and moves through the queue from the end to the beginning. If the hand is at the beginning of the queue, it moves back to the end if necessary.\nCache-hit. 
If the item already exists in the queue, it is simply marked as visited and the queue is not manipulated in any other way. If the item is already marked as visited, no change occurs.\nCache-miss. As new items arrive (cache-miss), the SIEVE algorithm always inserts them at the beginning of the queue. When there is no more space in the queue, the algorithm looks at the item pointed to by the hand. If the item is marked as visited, this flag is removed, and the hand moves to the previous item – as explained above, the hand moves towards the beginning of the queue. This is repeated until the hand points to an item that is not marked as visited. Then, this unvisited item is removed from the queue, creating space to store the new item in the cache. Although most of the removed items are located somewhere in the middle of the queue, new items are always stored at the beginning of the queue.\nThe whole algorithm can be nicely seen in the following picture (taken from the project's website):\nLazy Promotion and Fast Demotion Promotion and demotion are two basic principles for sorting items in the cache. It turns out that lazy promotion and fast demotion are two important properties of good cache-eviction algorithms. The principle of lazy promotion refers to the strategy of promoting items to be retained only when space needs to be freed in the cache. The goal is to preserve items with minimal effort. Lazy promotion increases throughput, because it requires less computational power, and improves efficiency, because we have more information about all items and their usage at the time when unnecessary items are discarded. Fast demotion works by removing items very quickly after they are inserted into the cache. The research linked above shows that fast demotion is also suitable for web caches. SIEVE is currently the simplest algorithm that implements both principles.\nLow Resistance to Scanning the Entire Cache I do have to mention one disadvantage of SIEVE.
The SIEVE algorithm is not inherently resistant to sequential scans, which makes this algorithm unsuitable for deployment in places where this is a frequent operation. Accesses generated by a sequential scan intertwine with popular objects, and the performance of SIEVE is worse in such a case than when using LRU. Marc Brooker on his blog provides a few ideas on how to deal with this when using SIEVE; using a small counter instead of a simple flag could allow SIEVE to be used in deployments where the cache is often scanned in its entirety.\nSIEVE in BIND 9 – Implementation and Performance SIEVE is a very simple algorithm to implement. Its initial implementation for BIND 9 took very little time, and after removing the existing cache-eviction mechanisms in BIND 9 and replacing them with SIEVE, the cache code is altogether leaner by more than 300 lines.\nThe basic implementation of LRU always moves items to the beginning of the list on a cache-hit. In multi-threaded programs this means that the entire data structure must be locked on every cache-hit (even though these should be mostly reads), which has a negative impact on performance. In practice, two techniques are commonly used, and BIND 9 implemented both of them – slowed update and LRU segmentation. It's not rocket science. Slowed update means that when moving an item to the beginning of the list, we note the time of the last update, and then we no longer update that item on subsequent accesses within a defined time window. LRU segmentation means that we have multiple LRU lists and each item has a fixed list assigned to it based on its characteristics, e.g., in the case of a DNS cache, items are assigned to individual lists based on the domain name under which they are stored in the cache.
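The segmentation technique above is easy to picture in code. Here is a minimal sketch of the LRU segmentation idea in Python (purely illustrative; the shard count and the class and method names are invented for this example and do not reflect BIND 9's internals):

```python
from collections import OrderedDict

class SegmentedLRU:
    """Illustrative segmented LRU: items are sharded into several
    independent LRU lists by a hash of their key (in a DNS cache this
    would be the domain name), so a cache-hit only has to touch, and
    in a threaded program lock, one shard instead of the whole cache."""

    def __init__(self, num_segments=4, per_segment_capacity=2):
        self.segments = [OrderedDict() for _ in range(num_segments)]
        self.capacity = per_segment_capacity

    def _segment(self, key):
        # A fixed list is assigned to each item based on its key.
        return self.segments[hash(key) % len(self.segments)]

    def get(self, key):
        seg = self._segment(key)
        if key not in seg:
            return None
        seg.move_to_end(key)          # promote on cache-hit
        return seg[key]

    def put(self, key, value):
        seg = self._segment(key)
        if key in seg:
            seg.move_to_end(key)
        elif len(seg) >= self.capacity:
            seg.popitem(last=False)   # evict the LRU item of this shard only
        seg[key] = value
```

Each shard behaves as an ordinary LRU list; only the shard holding the accessed item is ever modified, which is the whole point of the technique.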
This approach doesn't work as well in the case of hierarchical data such as the DNS tree, where DNS names higher up in the tree are accessed more often, but it works fine in combination with slowed update.\nAbove, I said that the SIEVE algorithm, when it finds an existing item (cache-hit), only marks that item as visited. This has one major positive impact on performance: in the case of a cache-hit, the entire list does not have to be locked; only consistent access to the above-mentioned flag needs to be ensured.\nWhat does it look like in practice? I have prepared two tests of BIND 9 as a resolver, which run on real data provided by a telecommunications operator. In the first test, the resolver's memory (max-cache-size) is limited to 128 MB, which is a fairly small value, and the cache fills up quickly. The following image shows the latency of responses. We have already shown similar images in the previous article about BIND 9, but as a reminder, it is a logarithmic percentile histogram. Simply put, we want the measurement line to push as far to the left and as far down as possible, which means that the server responded as quickly as possible to as many queries as possible. The first image shows the latency in the first half of the test, when starting with an empty cache; the second image shows the second half of the test, where the cache is more or less full. It can be seen that especially in the case of a full cache, SIEVE behaves much better and the server is able to respond to more clients.\nThe next two graphs show the memory and processor usage over time. I think it is clear that the implementation of the SIEVE algorithm brought greater stability in memory usage under unfavorable conditions, and less load on the CPU.\nCurrent versions of BIND 9 use two memory-cleaning mechanisms. One is based on LRU and is only used if memory runs out. The second mechanism is more opportunistic and cleans items from the cache whose DNS TTL has expired.
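For readers who prefer code to prose, the SIEVE eviction logic described earlier can be sketched in Python. This is a simplified, single-threaded illustration of the algorithm as described above; the class and method names are my own, and this is not the actual BIND 9 C implementation:

```python
class Node:
    """One cache entry in the SIEVE queue (a doubly linked list).
    'prev' points toward the beginning (newer), 'next' toward the end (older)."""
    __slots__ = ("key", "value", "visited", "prev", "next")
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.visited = False
        self.prev = self.next = None

class SieveCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = {}      # key -> Node
        self.head = None     # beginning of the queue (newest insertion)
        self.tail = None     # end of the queue (oldest insertion)
        self.hand = None     # next eviction candidate

    def get(self, key):
        node = self.table.get(key)
        if node is None:
            return None
        node.visited = True  # cache-hit: just set the flag, no list movement
        return node.value

    def _evict(self):
        node = self.hand or self.tail
        # Skip over visited items, clearing their flags as we go;
        # the hand moves from the end of the queue toward the beginning.
        while node.visited:
            node.visited = False
            node = node.prev or self.tail  # wrap from the beginning back to the end
        self.hand = node.prev              # hand keeps moving toward the beginning
        self._unlink(node)
        del self.table[node.key]

    def _unlink(self, node):
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        else: self.tail = node.prev

    def put(self, key, value):
        if key in self.table:
            self.table[key].value = value
            self.table[key].visited = True
            return
        if len(self.table) >= self.capacity:
            self._evict()
        node = Node(key, value)            # new items always go to the beginning
        node.next = self.head
        if self.head: self.head.prev = node
        self.head = node
        if self.tail is None: self.tail = node
        self.table[key] = node
```

Note how a cache-hit only flips one bit; all list manipulation happens on insertion and eviction, which is exactly the property exploited in the BIND 9 work described here.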
So what does it look like if we have enough memory for the cache and replace both of these mechanisms with just the SIEVE algorithm? As one spicy Czech saying goes: \"To think means... to know nothing,\" so we'll go back to measuring on real data again. This time I compared the existing and new implementation with the maximum cache size set to 30 GB, which did not overflow during the twenty minutes of measurement. As can be seen in the image below, even if we did not run out of memory for the cache, there is a slight improvement in the performance of the entire DNS resolver when the cache is already largely full, i.e., in the second half of the test:\nThere is no observable difference in CPU usage, but in memory usage, we see that removing TTL-based cache cleaning leads to an increase in memory consumption. This is essentially fine, because the purpose of the cache is to store data, not to discard it preventively.\nConclusion If you would like to implement SIEVE into your own systems, you can take a look at the implementation in Python, which Yazhuo Zhang presents in this blog. Alternatively, for an implementation in C, you can take a look at the Merge Request that implements the algorithm for BIND 9. The SIEVE project website lists other projects and libraries that use this new algorithm.\n","permalink":"/blogs/2025-sieve/","tags":["BIND","Open Source"],"title":"SIEVE – A Better Algorithm Than LRU?"},{"categories":["Stork","Open Source"],"contents":"\nPiotrek Zadroga and I attended the DNS Hackathon, hosted by RIPE NCC, DNS-OARC and Netnod in Stockholm on March 14-15. We proposed a project, the DNS Zone Viewer, to integrate another DNS implementation (besides BIND 9) with Stork. 
The goal of this project was to compare the APIs exposed by the DNS servers and the ways these APIs can be used, as well as get a sense of the complexity of integrating different authoritative servers with monitoring solutions.\nOperators diversify DNS server implementations in their networks for resiliency, security, and performance. Stork should be usable in heterogeneous DNS deployments, so it is important to remove any roadblocks that make integration with third-party software hard or impossible. The timing of this investigation is also important: better to do it early, while we are just beginning integration with BIND 9 and the Stork code is not tailored to BIND.\nThe format of the hackathon was that each person proposing a project had five minutes to introduce the project, and interested folks could join the team that was formed after the introduction. Since there were many projects proposed, those that didn\u0026rsquo;t get sufficient interest did not start. Our project got sufficient interest, and we formed a team of four people: Anand Buddhdev, Aleksi Suhonen, Marcin Siodelski, and Piotrek Zadroga.\nAnand (RIPE NCC) provided a lot of good input on how they operate their K-root servers. They use BIND 9, NSD, and Knot authoritative servers. Even though we had initially planned to integrate PowerDNS with Stork during the hackathon, it made more sense to focus on NSD or Knot, since Anand was on the team. We chose NSD because Willem Toorop from NLnet Labs also participated in the hackathon and could answer some questions about NSD.\nDuring the hackathon, we successfully created a PoC for integrating NSD with Stork. It included the following steps:\nNSD daemon detection by the Stork agent. Basic NSD configuration parsing to establish communication with the server. Use of the nsd-control tool to retrieve service status and configured zones. Parsing zone information to be displayed in Stork\u0026rsquo;s zone viewer (UI).
Adaptation of data models in the Stork server to accept a new type of application. Addition of a new view in the Stork UI to display the new type of application (NSD server). Here are a couple of screen shots from the hackathon:\nWhile doing this integration we made some observations that will be useful for future development of Stork\u0026rsquo;s DNS-specific capabilities.\nThe NSD daemon spawned two additional processes, each serving different zones. They looked like distinct NSD instances from the Stork perspective and thus Stork treated the processes as three different DNS servers. In fact, they all belonged to a single server instance; Anand showed us a K-root instance that spawned over 80 processes. The conclusion here was that Stork has to be able to group detected processes into instances. This can be done by comparing the configuration file location found in the process\u0026rsquo;s command line. The same location means the same instance.\nWe observed that the NSD API did not return a serial when the server was primary for the zone. We reported this issue to Willem who confirmed this behavior but also mentioned that it is going to be very hard to change because of the internals of the NSD implementation. Knowing the serial is very important for a monitoring tool like Stork when it comes to checking the propagation of zones between the primaries and secondaries. We concluded that there are two ways to overcome this problem: one is to query each zone for a serial; the second would be to parse the NSD configuration files. The former is easier and less error-prone but doesn\u0026rsquo;t scale. The latter is more complex and error-prone as it involves parsing a file with a proprietary format.\nOur third observation was that the DNS servers use different naming conventions for zone types. In the case of BIND 9 we have: primary and secondary, but also the deprecated terms master and slave. 
In the case of NSD we have: primary, ok (secondary zone is up-to-date), expired (secondary zone has expired), and refreshing (secondary zone has transfers active). So not only does NSD use ok to mark the secondary, but also the zone type is accompanied by the state information. We need to find a way to handle this in Stork\u0026rsquo;s zone viewer and other UI views. We want to be able to filter by the secondary keyword and find all secondary zones, but we want to allow searching by implementation-specific keywords for people familiar with NSD. The states of the secondary zones should also be displayed in the UI.\nThere are generally two types of APIs exposed by the DNS servers: REST APIs and an RPC with proprietary tools to communicate with the servers. NSD exposes an RPC and the nsd-control tool to use it. This is convenient and easy for a command-line user, but it is much harder to use for a monitoring tool like Stork. First, Stork must know where the nsd-control tool is located to execute it; second, the format of the response is proprietary and requires a dedicated parser. Conversely, the REST APIs return the data in a uniform format that can be conveniently parsed by existing tools. The proprietary tools can return different sets of data, so the parser must be prepared to handle different cases, and these cases may not be known upfront. In the particular case of NSD, we didn\u0026rsquo;t have enough time to test all possible outputs from nsd-control. This is still something to do, and will possibly result in some requests to NLnet Labs to improve their documentation in that regard.\nDuring the conversations between the team members, we also figured that it may be useful to consider several improvements in the Stork code (read on).\nFirstly, we currently assume that the DNS server is up and running when we connect it with Stork. This works in simple deployments but not necessarily in larger ones. 
Anand showed their K-root instance where they have BIND 9, NSD, and Knot installed on the same machine; there is only one running at a time but they sometimes need to swap. In this use case, it must be possible to register all three offline servers in Stork and then selectively run one of them.\nSecondly, Stork currently uses a Go library that lists processes and finds those that look like BIND 9 or Kea. It works in most cases, but there can always be another process or script that pretends to be BIND 9 or Kea. We may want to explore alternative ways of detecting services; since we often use systemd, it may be worth exploring what APIs it exposes to reliably detect a running process of a given kind.\nThirdly, a serial of \u0026ldquo;0\u0026rdquo; is a valid serial and is actually used quite often. Stork has to distinguish between the case when the serial is 0 and the case when the serial is unavailable. As we saw in the NSD case, the serial may be unavailable for the primary zone, but this does not imply a serial of zero; it implies that the serial is unavailable.\nDuring the hackathon, we also spent a significant amount of time discussing the DSC tool (https://www.dns-oarc.net/oarc/data/dsc). Anand uses DSC for K-root to collect statistics from all DNS implementations they use. It runs a separate daemon next to a DNS server using libpcap to capture and analyze the traffic. It has a major benefit in their environment that it is portable to any DNS solution, as it is not tied to any solution-specific API and statistics. As it is an older tool, it stores the data periodically in the XML format; however, new tools are being developed and maintained by DNS-OARC (Jerry Lundstrom) that can convert the data to the Prometheus format. It may be interesting for us to consider integrating DSC into Stork, so that Stork can manage DSC configuration, enable/disable this tool on the selected machine, and export the statistics from the monitored DNS servers into Prometheus. 
It is obviously orthogonal to any statistics gathered directly from the BIND 9 statistics channel or the statistics channel of any other DNS implementation we will integrate in the future.\nSpeaking of DSC, I have an idea for another DNS Hackathon topic. We could test how the statistics reported by DSC match the statistics returned by the DNS servers. I am sure there will be differences, because DNS servers may not be aware of all the traffic directed to them. Some statistics are not returned when the servers discard the traffic; for example, I heard about a case in which one of the resolvers was sending DNS replies to a DNS server that discarded them. It caused grief for a DNS server operator but was not reflected in any statistics returned by the discarding DNS server. Several people attending the hackathon wanted to work on this topic.\nThe full list of projects, along with two presentations of the results, is available on the DNS Hackathon GitHub page: https://github.com/DNS-Hackathon.\nRegards,\nMarcin Siodelski\nSenior Software Engineer, Kea and Stork\n","permalink":"/blogs/2025-dns-hackathon/","tags":["Stork","Open Source"],"title":"Report from the RIPE/OARC/Netnod DNS Hackathon"},{"categories":null,"contents":"If you think you may have found a security vulnerability in any ISC software, please take one or more of the following steps, as appropriate:\nOpen a confidential GitLab issue (preferred) or send email to bind-security@isc.org - for BIND 9-related security issues Send email to kea-security@isc.org - for Kea DHCP-related security issues Send email to stork-security@isc.org - for Stork-related security issues Emails to any of the above addresses automatically create secure, confidential issues in ISC\u0026rsquo;s GitLab instance.\nsecurity-officer@isc.org - for any other security issues* * If possible, we ask that you please encrypt your communications to the security-officer@isc.org address using the ISC Security Officer public key found on our PGP Key 
page. Our OpenPGP keys are also available from our FTP site.\nReport non-urgent BIND 9 issues.\nReport non-urgent Kea DHCP issues.\nReport non-urgent Stork issues.\nReport non-urgent ISC DHCP issues.\nSecurity Issues\nDNS and DHCP are critical to the Internet infrastructure. We appreciate your cooperation, assistance, and initiative in informing us of any bugs you may find in our software. If you think you may be seeing a potential security vulnerability in BIND 9 (for example, a crash with a REQUIRE, INSIST, or ASSERT failure), please open a confidential GitLab issue. For a potential security vulnerability in Kea DHCP, please report it to us immediately at kea-security@isc.org. Please do not post security vulnerabilities on a public mailing list.\nCrashes\nIf you have a crash, you may want to consult our KB article What to do if your BIND, Kea DHCP, Stork, or ISC DHCP server has crashed.\nBIND 9 Bugs/Feature Requests\nTo report a non-security-related bug or request a feature in BIND, please navigate to our GitLab instance and enter your issue there. You will have the option of marking your issue confidential, if necessary. You will need to create an account on GitLab, but you can link your credentials from another GitLab instance or social media account.\nKea DHCP Bugs/Feature Requests\nKea has an open bug database in GitLab. Before opening a new issue, please look and see if someone has already logged the bug you wish to report. You may be able to add information to an existing report, or to find a workaround or updated status on the issue that impacts you. We also track feature requests in the issue tracker, so please submit feature requests there as well. Often it is helpful to first post these requirements on the kea-users mailing list to clarify whether there is already a way to accomplish what you need.\nStork Bugs/Feature Requests\nUsers are invited to visit our Stork issues list in GitLab. 
Before opening a new issue, please look and see if someone has already logged the bug you wish to report. You may be able to add information to an existing report, or to find a workaround or updated status on the issue that impacts you. We also track feature requests in the issue tracker, so please submit feature requests there as well. Often it is helpful to first post these requirements on the stork-users mailing list to clarify whether there is already a way to accomplish what you need.\nISC DHCP Bugs\nISC DHCP\u0026rsquo;s open bug database migrated to GitLab in March 2019. If you previously reported an issue, you may need to re-report it in GitLab. You are welcome to log feature requests, but at this point, we are unlikely to add new features to ISC DHCP because the software has reached End-of-Life. Often it is helpful to post your issue on the dhcp-users mailing list to clarify whether there is already a way to accomplish what you need. We also recommend that ISC DHCP users consider migrating to Kea.\nSoftware Support\nBIND 9, Kea DHCP, Stork, and ISC DHCP are open source software. We provide expert professional support for our enterprise users via support subscriptions. Please contact us to find out more about purchasing a support contract and contributing to our development work.\nCommunity support is available on our public mailing lists: bind-users, kea-users, stork-users, and dhcp-users.\nDue to a large ticket backlog and an even larger quantity of incoming spam, we may sometimes be slow to respond, especially if a bug is cosmetic or if a feature request is vague or low-priority. However, we truly appreciate and depend on community-submitted bug reports and will address all reports of serious defects.\nThank you for your support!\n","permalink":"/reportbug/","tags":null,"title":"How to Submit a Bug Report"},{"categories":null,"contents":"ISC has an OpenPGP key. 
OpenPGP is a public key system, which means that if you have our public key and we sign a mail message (or a software distribution) using our private key, you can have a reasonable confidence level that the message or distribution really did come from us. You can learn more about OpenPGP in RFC 2440.\nIf you suspect you have found a security defect in BIND 9, Kea DHCP, Stork, or ISC DHCP, or if you wish to inquire about a security issue that you have learned about which has not yet been publicly announced, ISC encourages you to take one or more of the following steps, as appropriate:\nOpen a confidential GitLab issue (preferred) or send email to bind-security@isc.org - for BIND 9-related security issues Send email to kea-security@isc.org - for Kea DHCP-related security issues Send email to stork-security@isc.org - for Stork-related security issues Emails to any of the above addresses automatically create secure, confidential issues in ISC\u0026rsquo;s GitLab instance.\nsecurity-officer@isc.org - for any other security issues* * If possible, we ask that you please encrypt your communications to the security-officer@isc.org address using the ISC Security Officer public key found below. 
Our OpenPGP keys are also available from our FTP site.\nSecurity Officer (security-officer@isc.org) - PGP key to report potential general security issues\nPlease see this blog post if you are interested in our current signing procedure during the rollover period, and for releases after December 2022.\nCurrent Set of ISC Code-Signing Keys\nExpiring ISC Code Signing Key 2021 - 2022 (codesign@isc.org) - Expired on 1 February, 2023\nPrior ISC Code Signing Key 2019 - 2020 (codesign@isc.org)- Expired 31 January, 2021\nPrior ISC Code Signing Key 2017 - 2018 (codesign@isc.org) - Expired 31 January, 2019\nPrior ISC Code Signing Key 2015 - 2016 (codesign@isc.org) - Expired 31 January, 2017\nPrior ISC Code Signing Key 2013 - 2014 (codesign@isc.org) - Expired 31 January, 2015\nSupport-Staff\nPGP Keys Currently Used for Signing ISC Software pub rsa4096 2022-11-03 [SC] 706B 6C28 620E 76F9 1D11 F7DF 510A 642A 06C5 2CEC uid Michał Kępień (Code-Signing Key) \u0026lt;michal@isc.org\u0026gt; pub rsa4096 2022-11-03 [SC] D99C CEAF 8797 4701 4F03 8D63 182E 2357 9462 EFAA uid Michal Nowak (Code-Signing Key) \u0026lt;mnowak@isc.org\u0026gt; pub rsa4096 2022-11-03 [SC] 0259 A33B 5F5A 3A44 66CF 345C 7A5E 084C ACA5 1884 uid Wlodek Wencel (Code-Signing Key) \u0026lt;wlodek@isc.org\u0026gt; pub rsa4096 2022-11-03 [SC] 090A 2A07 923F 925B 5767 803A 42E5 DF78 C832 71DB uid Marcin Godzina (Code-Signing Key) \u0026lt;mgodzina@isc.org\u0026gt; pub rsa4096 2022-11-03 [SC] 9580 D6BF 2CC8 0F1E 3BB1 1252 DEAB 91D5 4B13 C9B8 uid Greg Choules (Code-Signing Key) \u0026lt;greg@isc.org\u0026gt; pub rsa4096 2022-11-03 [SC] FC87 4C3E 3FE8 6770 70AC 71BE B5EF F6AC 7E1A DDF8 uid Cathy Almond (Code-Signing Key) \u0026lt;cathya@isc.org\u0026gt; pub rsa4096 2023-04-27 [SC] DA6A 3508 E672 A49D D382 AFD9 5B8F 4D91 B88E D909 uid Andrei Pavel (Code-Signing Key) \u0026lt;andrei@isc.org\u0026gt; Verifying a source tarball with the pgp key: You will need to have the GnuPG package installed. 
Then download the appropriate key above, save it to a file, and import it into your own keyring:\ngpg --import KEYFILE # for example, KEYFILE could be pgpkey2015.txt Then you can verify any BIND or DHCP release by:\ngpg --verify SIGFILE TARBALL For example, if you have downloaded bind-9.10.4.tar.gz and the accompanying signature file bind-9.10.4.tar.gz.sha512.asc from our downloads page:\ngpg --verify bind-9.10.4.tar.gz.sha512.asc bind-9.10.4.tar.gz ","permalink":"/pgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":["ISC","Company News"],"contents":"We\u0026rsquo;re back with another look at one of ISC\u0026rsquo;s staff members: Ben Scott!\nBen started with us as a support engineer in December 2024 and has been quickly getting up to speed; if you\u0026rsquo;re an ISC support customer, you may already have had the pleasure of interacting with him. So far, he\u0026rsquo;s been working hard to learn all the ins and outs of our software and the related protocols.\nAfter studying computer science as an undergraduate, Ben started his career in support working at a small software company as a help-desk technician. He then joined an IT consulting firm, where he worked with a wide variety of customers in different industries, using different technologies, and facing different issues. He eventually became the IT manager for a small, family-owned manufacturing company, where he helped build their IT operations essentially from the ground up. After that company was acquired, he spent several years doing consulting and contract work before coming to ISC.\nBen was happy to join our team because he recognizes the value of open source. He says, \u0026ldquo;I find Free/Open Source Software (FOSS) is often technically superior to proprietary offerings. I suspect because profit is less of a driving force, FOSS sees less fluff added, and certainly less stuff like data mining.
There\u0026rsquo;s less motivation to tie everything into an unwieldy mess just to sell a package.\u0026rdquo;\nHe adds, \u0026ldquo;The lack of licensing costs and limits is also a big plus. Yes, this means one often has to pay extra for support, but that is completely normal in the world of proprietary software as well. And the freedom provided by FOSS is also significant: anyone can fork the software and add their improvements or fixes. When the software is owned by the community, the whole community benefits from enhancements.\u0026rdquo;\nBen\u0026rsquo;s far-ranging educational and work experience is serving him well at ISC so far, and he loves that he\u0026rsquo;s encouraged to continue learning every day. \u0026ldquo;Most of my education has come from innumerable books, manuals, FAQs, e-mail discussion lists, web forums, user groups, and yes, even the occasional class. The open nature of the Internet and Free/Open Source Software has helped me immeasurably, both throughout my career and here at ISC so far.\u0026rdquo;\nBen enjoys the challenge of ISC customer support: \u0026ldquo;I solve problems for a living. Whether it\u0026rsquo;s a customer who has a question about how to best use the software, a document that needs updating, or a bug that needs to be tracked down, it\u0026rsquo;s all about solving problems. It\u0026rsquo;s equal parts technical knowledge, detective skills, and communications. In the process, I get to work with some of the foundational protocols of the Internet, and a variety of customers and scales of operation. I\u0026rsquo;m not worried about getting bored!\u0026rdquo;\nBen says his favorite part of working here at ISC so far is his coworkers. \u0026ldquo;Everyone here has been welcoming and helpful from the start. It\u0026rsquo;s a crowd of fiendishly-smart people who are all willing to share their knowledge and collaborate. Mistakes are treated as mistakes and not character flaws. 
There is a strong sense of camaraderie, which is not always the case in remote-work teams. Managers listen and clearly have everyone\u0026rsquo;s best interests in mind. It\u0026rsquo;s like a dream come true.\u0026rdquo;\nAnd it\u0026rsquo;s our good fortune to have him as part of the ISC family!\n","permalink":"/blogs/isc-profile-scott/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer – Ben Scott!"},{"categories":["ISC","Open Source","Stork"],"contents":"ISC\u0026rsquo;s Senior Software Engineer Marcin Siodelski is attending the upcoming DNS Hackathon in Stockholm, sponsored by Netnod, DNS-OARC, and the RIPE NCC. For more information about how to participate in the Hackathon, see this blog post. Applications are ongoing, and even if you can\u0026rsquo;t travel to Stockholm, you can participate remotely via chat.\nMarcin will be working on the Stork DNS Zone Viewer, specifically, attempting to retrieve and view DNS zones from PowerDNS, leveraging the DNS standards (IXFR, AXFR, notify) to support a cross-platform monitoring tool. Today, Stork manages Kea DHCP systems, and has a very nascent integration with BIND 9 DNS. In 2025, we plan to extend Stork to also provide some DNS management capabilities.\nMarcin is hoping other participants, and particularly PowerDNS users, will join in and help with this open source effort.\n","permalink":"/blogs/2025-dns-hackathon/","tags":["Open Source","DNS"],"title":"Stork DNS Project at the DNS Hackathon"},{"categories":["ISC","Open Source","DNS"],"contents":"The US Executive Order \u0026ldquo;Strengthening and Promoting Innovation in the Nation\u0026rsquo;s Cybersecurity\u0026rdquo;, published on January 17, 2025, calls for a number of measures to strengthen cybersecurity in the US. These include many generally accepted best practices, such as publishing ROA (Route Origin Authorization) records, paying attention to BGP leaks, and source address validation.
The document also includes a number of recommendations for encrypting communications, including voice, video and \u0026hellip; DNS.\nThe call for encrypting Internet traffic was a bit of a surprise, coming from the US government, but it apparently reflects the realization that there are pervasive network penetration and surveillance efforts across the Internet and that increasingly, even government communications on presumably protected networks may be vulnerable to these actors.\nSec. 4, Securing Federal Communications, of the Executive Order reads, in part:\n(c) Encrypting Domain Name System (DNS) traffic in transit is a critical step to protecting both the confidentiality of the information being transmitted to, and the integrity of the communication with, the DNS resolver.\n(i) Within 90 days of the date of this order, the Secretary of Homeland Security, acting through the Director of CISA, shall publish template contract language requiring that any product that acts as a DNS resolver (whether client or server) for the Federal Government support encrypted DNS and shall recommend that language to the FAR Council. Within 120 days of receiving the recommended language, the FAR Council shall review it, and, as appropriate and consistent with applicable law, the agency members of the FAR Council shall jointly take steps to amend the FAR.\u0026quot;\nThis means that new US government procurement will include a requirement for encrypted DNS.\n(ii) Within 180 days of the date of this order, FCEB agencies shall enable encrypted DNS protocols wherever their existing clients and servers support those protocols. FCEB agencies shall also enable such protocols within 180 days of any additional clients and servers supporting such protocols.\nPresumably, agencies that haven\u0026rsquo;t yet enabled encrypted DNS will now begin doing so wherever their existing systems support it.\nWhat are the DNS encryption options? 
Prior to the development of DNS encryption, users seeking privacy on the Internet used either VPNs or an onion-routing network such as Tor to shield all their Internet traffic from view until it could be combined with and obscured by the larger volume of other user traffic. In the past five years, several encryption options that do not require a tunnel have been developed specifically for DNS traffic.\nThe ICANN SSAC produced a report that compared the two major DNS encryption technologies specified by the IETF: DNS over HTTPS (DoH) and DNS over TLS (DoT). DoT encrypts the stream from the stub resolver to the recursive resolver, but not from the recursive resolver to the authority. The logic is that there is enough traffic coming from the recursive resolver that the individual DNS queries are lost in the larger traffic flow. DoH was sponsored by cloud providers; it likewise encrypts traffic from clients to the resolver, but is typically implemented in the browser. (There is a subsequent proposal for encrypted transport to the authoritative server, called ADOT, which is not yet standardized, nor is it specified in this US Government EO.)\nIn this crude diagram, the (xxxxx) represents the encrypted portion of the DNS exchange.\nDoT [client browser] \u0026mdash; [stub resolver] (xxxxx) [resolver] \u0026mdash; [authoritative server]\nDoH [client browser] (xxxxx) [resolver] \u0026mdash; [authoritative server]\nIn addition to protecting individual queries and the privacy of clients making those queries, encryption can also be used to protect zone transfers. In some applications, zone names may themselves include confidential information, such as client identity. The IETF standard for this is RFC 9103, referred to as \u0026ldquo;XoT\u0026rdquo;, for DNS Zone Transfers over TLS. This is not specifically mentioned in the recent Executive Order.\nDNS Encryption With BIND 9 Since version 9.18, BIND supports both DoH (DNS over HTTPS) and DoT (DNS over TLS). 
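As a rough sketch of what enabling these listeners looks like in named.conf on BIND 9.18 or later (the certificate and key paths here are hypothetical placeholders, and `{ any; }` is far too permissive for most production deployments):

```
// Sketch only: key/cert paths are placeholder assumptions.
tls local-tls {
    key-file "/etc/bind/tls/server.key";
    cert-file "/etc/bind/tls/server.crt";
};

options {
    // DoT on the standard port 853
    listen-on port 853 tls local-tls { any; };
    // DoH on port 443, using BIND's built-in "default" HTTP
    // endpoint, which serves the conventional /dns-query path
    listen-on port 443 tls local-tls http default { any; };
};
```

In practice the address-match list would be restricted to the intended client networks, and busy deployments may instead terminate TLS on a separate proxy, as discussed below.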
The newest transport technology, DNS over QUIC (DoQ), is in active development on our BIND 9.21 branch. BIND 9.20 also supports the PROXYv2 protocol over both DoH and DoT.\nCISA published \u0026ldquo;Encrypted DNS Implementation Guidance\u0026rdquo; back in April of 2024. This document recommends, in the case of BIND, use of DNS over HTTPS, as well as zone transfers over TLS (XoT). It also discusses the use of a proxy to terminate TLS on a separate machine within the protected core network.\nLimitations of DNS Encryption DNS encryption alone does not provide a complete privacy solution, if, as is very often the case, the DNS lookup is followed by an HTTP connection. This has been documented by many observers. One excellent presentation that illustrates the issues is \u0026ldquo;What Can You Learn from an IP?\u0026rdquo;, given at ANRW \u0026lsquo;19 by S. Patil \u0026amp; N. Borisov. This problem is being addressed with the development of ESNI (Encrypted Server Name Indication) and ECH (Encrypted Client Hello), but it remains a limitation worth noting.\nImpact of Encryption on Operations Besides the overhead that encryption places on the DNS server (one reason why some operators prefer to offload the TLS processing to a separate box in very busy deployments), the biggest objection to encrypting the DNS is that it can make it far more difficult to identify and block abuse in the DNS, as well as complicating troubleshooting.\nSome Reading on DNS Encryption ICANN SSAC SAC109: The Implications of DNS over HTTPS and DNS over TLS\nCISA Encrypted DNS Implementation Guidance\nDoH Support in BIND 9\nDNS Privacy at IETF 104, a blog post by Geoff Huston\nCentralized DOH is bad for privacy, in 2019 and beyond, a RIPE Labs blog post by Bert Hubert\nSecurity Control Changes Due to TLS Encrypted ClientHello, another RIPE Labs blog post by Kathleen Moriarty\n","permalink":"/blogs/2025-US-EO-encrypteddns/","tags":["Open Source","DNS"],"title":"Recent US Executive Order Calls for Encrypting 
DNS"},{"categories":["ISC","Annual Reports","Company News"],"contents":"ISC\u0026rsquo;s Organizational Health and Operations Financially and organizationally ISC is in good health, with no major concerns. Revenues in 2024 were strong, nearly $7.7M, which was enough to cover development expenses for our BIND and Kea programs, as well as to fund our overhead, F-Root operations, and Stork development, which didn\u0026rsquo;t generate any revenue. ISC ended 2024 with 45 staff members, over half of whom are software engineers. The BIND team consists of 16 engineers, with six of these focused on QA and release operations. The combined DHCP/KEA and Stork team has 10 software engineers, including three focused on QA and release operations. Three engineers manage the F-Root operations, and some of our internal computing infrastructure. We have seven support engineers, who take turns providing on-call coverage nights and weekends. This leaves five people in our sales and marketing departments and four in general and administrative (G\u0026amp;A).\nISC currently has only a single layer of management, which limits our ability to grow further, because we don\u0026rsquo;t want to add another layer. We are content with our current size, because we feel we have an effective and efficient structure and a pleasant work environment. Everyone at ISC works remotely, and we meet in person twice a year for dedicated working meetings, as well as occasionally at technical conferences.\nISC Core Programs BIND 9 In 2024 we hired three new BIND engineers, bringing the team to a total of nine developers, five QA staff and two managers (Director of DNS Engineering and BIND 9 QA Manager). We normally release two stable and one development version of BIND each month. 
We postponed the January 2024 BIND releases and skipped June and November, but still issued a total of 25 open source releases, as well as 12 Supported Preview (-S) versions - plus package versions of all of them.\nWe released BIND 9.20, a new stable version that completed the transition to new libuv-based event loops, begun with BIND 9.16 and continued in 9.18. We had received reports that some long-duration tasks, like updating statistics, handling transfers, and similar system work, seemed to be blocking query resolution in very busy systems, so we added specialized thread pools to offload long-duration tasks. This was our first stable version with a new database infrastructure, qp-trie, replacing a red-black tree for functions requiring a database, including the zone database, server database, and resolver cache. The transition to libuv and qp-trie were major refactoring projects, involving multiple developers for many months, and requiring extensive testing to discover any performance impacts. The result is a system that scales better on modern platforms.\nThe DNSSEC signing system has received a major update, and now uses the DNSSEC Key and Signing Policy (KASP) system for managing signed zones. The BIND team added more extended error codes and zone transfer statistics, updated our catalog zones implementation, and implemented the ProxyV2 protocol. ISC staff helped to incubate the Deleg proposal, which is now a new IETF working group, working on a standard for providing more information about the authoritative servers for a zone.\nWe evaluated, mitigated, and published eleven BIND CVEs, several of which were DNS-wide multi-vendor issues at the protocol level, requiring extensive coordination with other parties. These vulnerabilities take longer to fix and publish, because of the overhead of coordinating with other teams. 
Historically, most of our CVEs have been assertion failures, but lately there has been a lot of research into overloading different elements of the DNS, resulting in a number of CVEs that can exhaust resources. Some of the mitigations will necessarily require placing limits on the size or number of records BIND will process, which may end up requiring configuration changes for some users. CVE-2023-4408: Parsing large DNS messages may cause excessive CPU load CVE-2023-50387: KeyTrap - Extreme CPU consumption in DNSSEC validator CVE-2023-50868: Preparing an NSEC3 closest encloser proof can exhaust CPU resources CVE-2023-5517: Querying RFC 1918 reverse zones may cause an assertion failure when nxdomain-redirect is enabled CVE-2023-5679: Enabling both DNS64 and serve-stale may cause an assertion failure during recursive resolution CVE-2023-5680: Cleaning an ECS-enabled cache may cause excessive CPU load CVE-2023-6516: Specific recursive query patterns may lead to an out-of-memory condition CVE-2024-0760: A flood of DNS messages over TCP may make the server unstable CVE-2024-1737: BIND’s database will be slow if a very large number of RRs exist at the same name CVE-2024-1975: SIG(0) can be used to exhaust CPU resources CVE-2024-4076: Assertion failure when serving both stale cache data and authoritative zone content As an open source project, anyone can open an issue in our repository, and many people do. The BIND team has a generous backlog of (as of this minute) 613 open issues. They typically close between 25 and 50 per monthly release, so there are plenty to choose from. Of the open issues, 96 are labeled as bugs, 132 are feature requests, 22 are labeled documentation issues, and 70 are to-do items related to tests.\nOur bind-users mailing list has continued to be busy and to provide helpful advice. We currently have 2408 subscribers to bind-users. 
We encourage all users to subscribe to the very low-traffic bind-announce mailing list, where we announce new releases and feature deprecation. Current subscribers to bind-announce: 3459.\nKea DHCP We plan to post a separate blog going into more details on the many accomplishments in the Kea project in 2024, but here are a few highlights:\nWe published 12 Kea releases, including a new stable version, Kea 2.6.\nWe created the Kea migration page on our website, put up a live Kea migration utility, and released a packaged version of the KeaMA utility.\nWe have seen Kea adoption and deployment expanding into a wider range of enterprise environments; our early adopters tended to be access providers (ISPs). Questions about ISC DHCP migration continue, and probably will for several more years.\nThe Kea project is very busy and also has a healthy number of open issues, 690 at this writing. The monthly development releases typically resolve 20-35 issues. Of the open issues, 44 are labeled as bugs and 137 are enhancements or feature requests. The kea-users mailing list is growing, and has provided a lot of users with configuration help this past year. At this writing the list has 558 members.\nStork Stork came a long way in 2024, breaking out from a read-only monitoring system to provide comprehensive configuration control for Kea. We released eight versions, including Stork 2.0, issued in November. With that release, we began offering professional support for Stork, and included it under our ISC Software Support and Security Vulnerability Disclosure policy documents. We launched a live demo site for Stork (demo.stork.isc.org) to let prospective users try it out with minimal effort. Our marketing team is thrilled to finally have a product with interesting screen shots!\nWe have ramped up our efforts to track potential vulnerabilities in the Stork dependencies, because the web ecosystem tends to have relatively frequent issues. 
We even published our first Stork CVE:\nCVE-2024-28872: Incorrect TLS certificate validation can lead to escalated privileges\nF-Root ISC added new F-Root sites in Belgrade (Serbia), Pavlodar (Kazakhstan), Lviv (Ukraine), and San Pedro Sula (Honduras), and replaced the equipment in Warsaw (Poland), with thanks to our site sponsors.\nISC\u0026rsquo;s Support Customer Base We have literally no idea how many users there are of our software, but we have frequent, excellent communications with our support customers. ISC\u0026rsquo;s technical support contracts fund all the rest of our operations, including the development and maintenance of our open source. 2024 was a good year for our technical support service.\nWe hired two new support engineers, which required interviewing dozens of candidates. These additions brought our support team up to a total of seven engineers, including the Director of Technical Support (who is also a capable engineer, of course).\nIn July, we migrated all our support customers and their open tickets from a large commercial support system back to our old open source ticketing system, Best Practical\u0026rsquo;s Request Tracker. It turned out that our support customers preferred the email and text interface of the older system to the vastly fancier, and more complicated, commercial system.\nWe implemented a new process for publishing Advance Security Notifications to our customers, using our ticketing system. We heard that some of our support customers don\u0026rsquo;t follow our announcement mailing lists, so we added announcement channels in our ticketing system.\n2024 saw a significant increase in requests for assistance in migrating from ISC DHCP to Kea, both on the mailing lists and among our support customers. Support Customers As of the end of 2024, we have 187 total customers with Basic, Enterprise, or OEM support agreements that extend into 2025. 
88 of these customers have BIND support contracts, and 95 have Kea and/or ISC DHCP contracts. 144 of our customers were returning from prior years, while 43 were new to us. This is a net gain of 34 more support customers than we had at the start of 2024, so we more than replaced the few customers who did not renew. We have a total of 211 ongoing support contracts, because many customers have support from us for multiple products. Support Customers by Region: North America - 101 Europe - 57 Asia-Pacific - 19 South America - 5 Other - 5 (includes Middle East, Africa, and India) In addition, we entered 2025 with 43 subscribers for Kea Premium whose subscriptions extend beyond 2024. These customers are self-supporting, with the help of the public kea-users mailing list. ISC\u0026rsquo;s Knowledgebase We published or updated 69 articles, including 12 new CVE advisories. New articles cover such topics as: a Stork quickstart guide, Stork LDAP authentication, private networks and split DNS, RRset limits in zones, redefining standard options, exempting broken domains in recursion, altering the subnet mask option based on giaddr, the Umbrella feature in detail, and a brief introduction to LDAP.\nOur top 10 most-read articles in 2024 were:\nBIND 9 Security Vulnerability Matrix ISC DHCP 4.1 Manual Pages - dhcpd.conf BIND Logging - some basic recommendations ISC DHCP 4.4 Manual Pages - dhcp-options ISC Packages for BIND 9 CVE-2023-50387: KeyTrap - Extreme CPU consumption in DNSSEC validator CVE-2023-50868: Preparing an NSEC3 closest encloser proof can exhaust CPU resources ISC DHCP 4.4 Manual Pages - dhcpd.conf Understanding views in BIND 9, with examples Using Official ISC Packages for Kea Several years ago we put the ISC DHCP man pages into documents in the KB, and ever since they have been astonishingly popular. 
There are fewer articles on Kea in our Knowledgebase, and fewer views, in part because the Kea ARM is quite comprehensive and provides more detailed configuration advice than the BIND ARM. BIND is the most frequently read category, with nearly 168,000 views of those articles.\nThe top searches in the KB were for \u0026ldquo;windows\u0026rdquo;, \u0026ldquo;logging\u0026rdquo;, \u0026ldquo;cve-2023-50387\u0026rdquo;, \u0026ldquo;failover\u0026rdquo;, \u0026ldquo;ipv6\u0026rdquo;, \u0026ldquo;dnssec\u0026rdquo;, \u0026ldquo;rndc\u0026rdquo;, \u0026ldquo;next-server\u0026rdquo;, \u0026ldquo;ddns\u0026rdquo;, \u0026ldquo;CVE-2023-50868\u0026rdquo;, and \u0026ldquo;Docker\u0026rdquo;. We ended support for BIND on Windows several years ago, but people continue to look for it.\nISC Contributions to External Open Source Projects and Community Programs ISC encourages staff to participate in the Internet infrastructure technical community, and several currently have significant roles outside of ISC.\nDNS-OARC ISC is an organization member of OARC. Ray Bellis is a member of the Board of Directors of DNS-OARC. Cathy Almond is the Chair of the Program Committee of DNS-OARC. NANOG T. Marc Jones is a member of the NANOG Community Engagement Committee.\nICANN Jeff Osborn is Chair of the Root Server System Advisory Committee (RSSAC). Rob Carolina is ISC\u0026rsquo;s Alternative Representative on RSSAC. Ondřej Surý is a Trusted Community Representative and Recovery Key Share Holder for the DNS Root Zone. IETF Petr Špaček is ICANN Liaison for IETF, and a member of the expert reviewer team for the DNS Directorate. While ISC staff participated in reviews and discussions of numerous IETF documents, only one document authored by an ISC staff person reached RFC status in 2024: DHCPv6 Options for the Homenet Naming Authority, co-authored by Tomek Mrugalski. Open Source Contributions to Non-ISC Projects Our staff are welcome and encouraged to contribute to external projects. 
These are some of our code contributions.\nMichał Kępień\nhttps://github.com/libgit2/pygit2/pull/1282 https://github.com/openwrt/openwrt/pull/13453 https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111613 https://github.com/libgit2/libgit2/pull/6973 Michal Nowak\nhttps://github.com/libuv/libuv/issues/4584 https://github.com/rthalley/dnspython/issues/1055 https://github.com/rthalley/dnspython/issues/1034 Sławek Figiel\nhttps://github.com/shirou/gopsutil - Fix a bug occurring on BSD-like systems https://github.com/goreleaser/nfpm - Enable environment variable substitution for a configuration option Tomek Mrugalski implemented DNR option sending in Wireshark; his patch was submitted upstream. He also co-authored an IETF draft that was standardized in 2024, IETF RFC 9526.\nOndřej Surý led the effort to update the PHP packages in next Debian stable (trixie) to PHP 8.4, and runs the highly successful https://deb.sury.org project that provides multiple PHP version packages for Debian and Ubuntu.\nRay Bellis authored an IETF draft that was standardized in 2024, IETF RFC 9619.\nISC sponsored work in the libuv project, to add the uv_udp_try_send2() function.\nOther Community Engagement ISC published 18 blogs in 2024.\nISC staff delivered at least eight presentations at community events. ISC abandoned Twitter/X a long time ago: in 2024 we added a Bluesky account to our social media presence.\nISC did not give any webinars in 2024. Our usual presenter was unavailable, and the viewership for these events had declined over the years, giving us the impression that these were no longer worth the effort. 
ISC sponsored an information table at all three 2024 NANOGs, sponsored a scholarship program for All Things Open for local user groups, and contributed to the 2024 BSDCan.\n","permalink":"/blogs/2024-accomplishments/","tags":["Open Source","DNS"],"title":"Looking back on ISC's 2024 accomplishments"},{"categories":["ISC","Open Source","Events"],"contents":"Here are some of the events ISC staff plan to participate in during the first half of 2025. We would love for any of our users to come and talk to us about their experiences with our software, or to share suggestions or ideas with us.\nEvent Location Start date End date Comments Pacific Telecommunications Council Honolulu, HI 2025-01-19 2025-01-22 Jeff Osborn (ISC\u0026rsquo;s President), T Marc Jones (Director, Sales) and Eddy Winstead (Senior Sales Engineer) attending CSNOG 2025 Zlín, Czechia 2025-01-21 2025-01-22 Petr Špaček, BIND Software Engineer, is presenting. EU Open Source Policy Summit Brussels 2025-01-31 2025-01-31 Notable event, ISC participating remotely. FOSDEM Brussels 2025-02-01 2025-02-02 Ondřej Surý, Director of DNS Engineering, is giving a talk on BIND in the DNS room. NANOG 93 Atlanta, GA 2025-02-03 2025-02-05 ISC is sponsoring. Look for us at the ISC table on Monday afternoon. DNS-OARC 44 Atlanta, GA 2025-02-06 2025-02-07 Three ISC staff are presenting. NTCA RTIME San Antonio, TX 2025-02-23 2025-02-26 ISC is exhibiting. Look for us at the ISC table! OpenSSF Policy Summit Washington, DC 2025-03-04 2025-03-04 Vicky Risk, Product Manager, attending. DKNOG 15 Copenhagen, DK 2025-03-06 2025-03-07 An ISC support engineer is attending. ICANN 82 Community Forum Seattle 2025-03-08 2025-03-14 Several ISC staff will be attending. IETF 122 Bangkok, TH 2025-03-15 2025-03-21 Several ISC engineers will attend. 
RIPE/Netnod DNS Hackathon Stockholm, SE 2025-03-15 2025-03-16 Marcin Siodelski will be organizing work on a DNS zone viewer Netnod Spring meeting Stockholm, SE 2025-03-18 2025-03-19 Marcin Siodelski, Senior Engineer on the DHCP \u0026amp; Stork team is attending. PLNOG 2025 Kraków 2025-05-12 2025-05-13 Engineer from DHCP/Stork team attending. RIPE 90 Lisbon 2025-05-12 2025-05-16 Several ISC staff attending. CENTR Jamboree Lyon, France 2025-05-21 2025-05-23 Vicky Risk, Product Manager attending. NANOG 94 Denver, CO 2025-06-09 2025-06-11 Several ISC staff attending. ICANN 83 Policy Forum Prague, CZ 2025-06-09 2025-06-12 Jeff Osborn and Rob Carolina will participate. Linux Foundation OSS Summit Denver, CO 2025-06-23 2025-06-25 Vicky Risk, Product Manager, attending. ","permalink":"/blogs/2025-meet-us/","tags":["Open Source","Events"],"title":"Where to meet ISC in person"},{"categories":["ISC","Open Source","DNS"],"contents":"At the ICANN 81 meeting in Istanbul on 10 November 2024, I gave a presentation about the DNS Root Server System, in an effort to increase understanding of the Root Server System (RSS) and Root Server Operators (RSOs). The talk was intended for the members of the ICANN Governmental Advisory Committee (GAC), but much of this explanation may be of interest to general audiences.\nThe Role and Purpose of the Domain Name System (DNS) DNS uses human-readable names - commonly called domain names - to find numerical computer addresses. Humans can remember and understand names like www.amazon.com, while computers need IP addresses like 18.239.62.181. DNS is what tells us that www.amazon.com is at 18.239.62.181. The numbers can and do often change, while the human-readable names stay the same.\nConnected devices on the Internet - computers, phones, printers, refrigerators, and so on - need DNS to be able to find other connected devices. 
When your smart fridge wants to send an alert to your phone to tell you you\u0026rsquo;re out of milk, that requires the DNS. But how do all the devices know where to find each other? Each device is able to ask the DNS questions about domain names, and the answers are IP addresses.\nWhat are the benefits of DNS? The obvious one is that names are much easier for humans to remember than strings of numbers. But equally important is that the service becomes very portable; the addresses/hardware/platform/location/anything else can be changed, but as long as the name stays the same it will still be findable. The DNS is also a huge distributed network that\u0026rsquo;s remarkably easy to use. It is a flexible, delegated database that includes hundreds of millions of directories arranged in what is arguably the world\u0026rsquo;s largest distributed database.\nSo that\u0026rsquo;s DNS in a nutshell, but obviously it\u0026rsquo;s significantly more complicated in practice.\nWhat are the Roles of an Address Resolver and an Authoritative Server? Devices get addresses from address resolvers, of which there are millions in the world. Resolvers can find and read what we might think of as the Internet\u0026rsquo;s \u0026ldquo;phone books,\u0026rdquo; which are actually authoritative servers that are held by organizations that each manage a portion of the Internet. 
Each of these authoritative servers contains the zone content, or address information, for all of the domains it controls.\nTo put it simply, DNS consists of devices asking questions like \u0026ldquo;What is the number for www.amazon.com?\u0026rdquo; and receiving the response \u0026ldquo;The number for www.amazon.com is (at the moment) 18.239.62.181.\u0026rdquo; These types of questions happen about 500 trillion times per day, and are answered in milliseconds by the resolvers.\nNormally, the resolvers remember what information they\u0026rsquo;ve asked for from the authoritative servers, and they hold that information for future queries. This is called \u0026ldquo;caching.\u0026rdquo; But sometimes, the resolver needs to learn a new number or confirm an old one.\nLet\u0026rsquo;s step through the layers of the process and see how DNS works for the fictitious location www.example.com, depending on how much information a resolver needs:\nCase 1 (the most common scenario): The resolver can construct the entire answer it needs using only its cached memory, so it doesn\u0026rsquo;t need to ask anyone. Case 2: The resolver has cached information about example.com, so it asks only the domain name\u0026rsquo;s authoritative server about where to find www. Case 3: The resolver doesn\u0026rsquo;t have information about www or example, but it knows where to get information about .com, a Top-Level Domain (TLD). It asks the TLD\u0026rsquo;s authoritative server for the location of example.com, and then that domain name\u0026rsquo;s authoritative server for the IP address of www. Case 4: If the resolver is brand new and has no information cached in its memory, it needs to begin filling its memory cache. It starts by querying the Root Server System (RSS) to find out where to get information about .com, then asks the TLD authoritative server about example, and then queries the domain name\u0026rsquo;s authoritative server for the location of www. 
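The four cache cases above can be sketched as a few lines of much-simplified, hypothetical Python; the cache keys and server labels are illustrative only, not real resolver internals:

```python
# Simplified sketch of the four cache cases for "www.example.com".
# The cache is a dict of names the resolver already knows; the returned
# list shows which servers it still has to ask, in order.

def servers_to_ask(cache, name="www.example.com"):
    if name in cache:            # Case 1: full answer already cached
        return []
    if "example.com" in cache:   # Case 2: only ask the domain's server
        return ["example.com authoritative"]
    if "com" in cache:           # Case 3: start at the .com TLD server
        return ["com TLD authoritative", "example.com authoritative"]
    # Case 4: cold cache - begin with the Root Server System
    return ["root server", "com TLD authoritative",
            "example.com authoritative"]
```

A real resolver generalizes this walk over arbitrary labels and combines it with TTL handling, but the shape of the decision is the same: the Root Server System is consulted only in the cold-cache case.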
More than 90% of answers fall under Case 1, where the resolver has the final IP address in its cached memory. Approximately 5% of queries fall in Case 2, and approximately 2% fall into Case 3. Only one of every 5,000-10,000 queries, or about 0.02% of the total number of IP address requests, requires a question to the RSS.\nHow, When, and Why a Resolver Consults the RSS The sole task of the RSS is to point queries from resolvers to the authoritative servers of all the Top-Level Domains on the Internet.\nThinking of the DNS in layers may help clarify it a bit:\nThere are three \u0026ldquo;layers\u0026rdquo; of things that a DNS query might ask, working in order from left to right in an Internet address (like www.example.com): what we call domain name zones, TLD zones, and the Root Zone.\nThere are estimated to be 350,000,000 different domain names on the Internet, like amazon.com, or isc.org, or royal.uk; they are maintained by a variety of domain registrars throughout the world. There are only about 1450 Top-Level Domains, like .fr or .edu; each one may have between 1000 and 10,000,000 domains within it. The TLDs are maintained by different TLD registries that fully control the domains under those TLDs. The Root Zone, by comparison, is very small. It is one document, containing a list of the 1450 TLDs and an address for each one. It is maintained by the Internet Assigned Numbers Authority (IANA) and cryptographically secured by the Root Zone Maintainer (RZM). To recap:\nA root server holds a copy of the Root Zone. The Root Zone holds the addresses of the 1450 TLDs, such as .com, .nl, .jobs, etc. A TLD\u0026rsquo;s authoritative server knows the address for all domain names under it, such as all addresses that end in .com (like tiktok.com or amazon.com), all addresses that end in .nl (like google.nl or amsterdam.nl), or all addresses that end in .jobs (like tech.jobs or highpay.jobs). 
A domain name\u0026rsquo;s authoritative server knows the addresses for the specific servers in its domain, like www.amazon.com or mail.amazon.com or info.amazon.com. The resolver on each device finds and returns the answer when the user wants it. In the millisecond world of a resolver, queries to the Root Server System are rare.\nRSS: What It Is and What It Isn\u0026rsquo;t Let\u0026rsquo;s look a little more closely at the Root Server System itself.\nThe RSS provides address information, not content. The RSS answers one small part of an address question: \u0026ldquo;Can you give me the address of an authoritative server where I can look up the addresses of the Top-Level Domains?\u0026rdquo; The RSS does NOT offer content; it does not host websites or email or any other content, and it does not transmit or deliver any Internet content.\nTakeaway: The Root Server System does not manage or carry any Internet content.\nThe RSS is not a \u0026ldquo;gatekeeper\u0026rdquo; to the Internet. It answers questions posed by address resolvers, in those rare instances when the address resolvers don\u0026rsquo;t already have the address answers in their cached memory. Takeaway: Internet traffic is almost always transmitted without the need to interact with the Root Server System.\nThe RSS is stable, secure, and resilient. From a technological standpoint: The RSS consists of more than 1800 globally distributed server instances, making it massively redundant. Each server instance holds 100% of the Root Zone information, and all these servers feature diverse hardware platforms, operating systems, DNS applications, and data routing.\nTakeaway: The Root Server System has no single point of technological failure.\nFrom an institutional standpoint: The RSS is jointly operated by 12 autonomous Root Server Operators (RSOs) around the globe. Each RSO is independent of the others, yet they continuously collaborate with each other. 
A force majeure event suffered by one RSO (such as a court injunction) has no operational impact on the others.\nTakeaway: The Root Server System has no single point of institutional failure.\nThe RSS has operated since the 1980s and has never suffered a service blackout, although many online attackers have tried. The diversity of the system is its strength. Takeaway: The Root Server System has a history of nearly 40 years of successful 24x7x365 operation.\nRoot Server Operators do not decide what appears in the Root Zone; they are simply a reliable, authenticated delivery method. A Registrant (the domain name holder) decides address information for its own domain and provides their authoritative service address to the TLD registry. A TLD registry decides its authoritative server addresses and provides those to IANA. IANA authenticates all revisions to TLD authoritative server addresses and provides them to the Root Zone Maintainer (RZM). The RZM cryptographically signs the Root Zone and provides it to the Root Server Operators and the world. Takeaway: The Root Server System serves the TLD addresses provided by the TLD, IANA, and RZM.\nThe Root Server System underpins the Domain Name System, but actual queries to the RSS in normal Internet operations are extremely rare.\nWe hope this information has been useful. To watch a recording of my full presentation or download the slides, please visit https://www.isc.org/presentations/.\n","permalink":"/blogs/root-server-system/","tags":["Open Source","DNS","Root Server System"],"title":"Why Most People Haven't Heard of the DNS Root Server System"},{"categories":["BIND","DNS","DHCP","Open Source","Company News","ISC"],"contents":"ISC is fortunate to have staff members in so many different countries around the world: our software development benefits from all the different perspectives - and we benefit personally! 
But there is one area in which we will always be hopelessly divided.\nWe hope our customers, partners, and users can set aside all divisions and focus only on joy this holiday season, and we wish you all the best in 2025. Thank you for your ongoing support of open source software for an open Internet!\n","permalink":"/blogs/2024-happy-holidays-from-isc/","tags":["holidays","2024","ISC"],"title":"Happy holidays from ISC!"},{"categories":["ISC","Company News","Kea","Stork"],"contents":"It’s been a while since we\u0026rsquo;ve profiled one of our engineers, so we\u0026rsquo;re overdue. But this isn\u0026rsquo;t one of our new hires: it\u0026rsquo;s time to get to know a little more about Tomek Mrugalski, our Director of DHCP Engineering!\nWhen Tomek Mrugalski started working at ISC in 2011, he was asked a seemingly simple question: \u0026ldquo;You wrote one DHCP server already. Can you write another one, but better?\u0026rdquo; He answered \u0026ldquo;Yes!\u0026rdquo; and has spent the last 13+ years following through on that answer.\nTomek, who joined ISC after working at Intel for seven years, holds a Master\u0026rsquo;s degree in computer science, a Master\u0026rsquo;s in space and satellite technologies, and a Ph.D. in telecommunications, and calls himself a \u0026ldquo;networking geek.\u0026rdquo; His family has been living in his home town of Gdańsk, Poland for generations; all three of his graduate degrees are from Gdańsk University of Technology.\nHis MSc topic, \u0026ldquo;DHCPv6 client and server implementation for Linux,\u0026rdquo; led Tomek to create the Dibbler open source project. (From the very beginning, the code was available in a CVS repository that he self-hosted, long before GitHub was even founded.) He kept improving his work, and eventually received an invitation to a DHCPv6 \u0026ldquo;bakeoff\u0026rdquo; in Amsterdam where he met many other DHCPv6 enthusiasts, including some ISC staff members.
Later, he was invited to join the IETF\u0026rsquo;s work on the DS-lite draft, which eventually became RFC 6334, the first of 14 RFCs that Tomek has authored or co-authored.\nTomek joined ISC as a software engineer, tasked with writing a replacement for the aging ISC DHCP software. The project was initially intended to create modules in a new BIND 10 framework, but when that project was cancelled the DHCP modules quickly became independent.\nTomek is an avid traveler: he has visited over 50 countries on six continents. He also loves nature and enjoys giving shoutouts to some of his favorite animals in his software projects. His first DHCP software was called Dibbler, named after a small marsupial that lives in Australia, and his Ph.D. simulator software was called Numbat (another Australian creature). So of course when it came time to name the new DHCP server, he called it Kea after the New Zealand parrot.\nAfter a few years, Tomek was asked to manage the whole DHCP team, which now includes 10 other engineers. Along the way, Tomek came up with a proposal to address what he felt was the biggest shortcoming of our BIND 9 and Kea DHCP software: the lack of an easy-to-use graphical interface. This led to our newest software project, Stork.\nKea is very popular already and Stork is quickly growing in popularity as a management dashboard for Kea; Tomek\u0026rsquo;s goal is to make both of those projects even better in the future. The major challenge that he and his team have just started tackling is to expand Stork to be able to interact with and manage BIND, and DNS in general. He says: \u0026ldquo;It\u0026rsquo;s both exciting and scary. DNS is a very different world compared to DHCP. I\u0026rsquo;m optimistic, though.\u0026rdquo;\nAlthough Tomek started working in the closed source world of Intel, he learned a lot of good testing and software development practices there that he has been able to bring to ISC.
He likes the philosophy of open source: \u0026ldquo;Whatever you\u0026rsquo;re trying to do, you don\u0026rsquo;t need to start from scratch; you can take existing code, extend it to do whatever you want, and hopefully give it back to the community, so others can take it even further.\u0026rdquo;\nTomek enjoys the freedom he gets from working at ISC. \u0026ldquo;When there\u0026rsquo;s something important that needs to be done, nobody tries to stop us. Here\u0026rsquo;s an example: back at Intel, our team developed an extension for Wireshark that added support for a new protocol. It was very useful for all the engineers involved, so we wanted to share it. But the red tape was impossible to manage - endless approvals, meetings with managers, lawyers asking questions about leaking intellectual property, and so on. After a couple of months we gave up. In contrast, at ISC we implemented a new DHCPv6 option in Kea that was not yet supported by Wireshark. I wrote the code that extended Wireshark, sent a patch that was accepted, and the whole thing was complete in a day or two.\u0026rdquo;\nHe adds: \u0026ldquo;Another great thing about working at ISC is that we get involved in some long-term activities that are good for the whole Internet; my personal favorite is the standardization work done in the IETF. In my opinion, this is the highest honor an engineer can achieve. When I work on a specific implementation, I can fix or improve just one piece of software. But when I work on a standard, I can fix or make things better for all of the implementations.\u0026rdquo;\nTomek says that one of the best professional compliments he\u0026rsquo;s ever received was from a Cisco engineer who was using Dibbler, his earlier DHCPv6 software. Tomek was surprised and asked why he was using Dibbler instead of Cisco\u0026rsquo;s own code. 
The engineer said, \u0026ldquo;It\u0026rsquo;s easier to use and it just works.\u0026rdquo; Tomek doesn\u0026rsquo;t write as much code as he used to, but in the daily discussions he has with his team he likes to push for solutions that are transparent and easy to use. We know our users and customers appreciate that!\nTomek is now the father of a three-year-old daughter, so his spare time is limited, but in those rare moments when time permits he loves to watch the skies and practice astronomy, a bit of satellite communications, and basically anything else related to space. He has been building a home observatory for the last few years, where he enjoys working on his astrophotography. He says: \u0026ldquo;The progress on the observatory is slow, but it\u0026rsquo;s very rewarding. I did the designs and most of the construction work myself, except the roof. When I\u0026rsquo;m really tired after a week of doing all the DHCP stuff, this is my place to unwind, relax, look up, and chase some asteroids.\u0026rdquo;\nHe\u0026rsquo;s even written some automation software for his observatory, which is (of course) open source; that and all of his other code is available on GitHub. \u0026ldquo;I like to release all the code I write as open source, even for my own personal projects. I don\u0026rsquo;t think anyone will ever use it, but the mere fact that it\u0026rsquo;s public makes me try harder to write better code.\u0026rdquo;\nISC is fortunate to have Tomek leading our DHCP development team.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers and users. 
As always, we welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-mrugalski/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer – Tomek Mrugalski!"},{"categories":["Kea","Open Source","Stork"],"contents":"ISC launched the Stork management application almost exactly five years ago, in November 2019, as an experimental tool. Today, Stork is a robust and valuable management application, and with this release full commercial support is also available.\nMaintain DHCP Service Uptime When your DHCP system isn’t configured correctly, or you have reachability problems, or a server has failed, you have an urgent problem. It is critical to find out and restore service quickly before other network services that rely on DHCP start failing. Stork enables you to monitor a multi-server DHCP system quickly, making changes in status obvious and easy to spot.\nStork is an open source project that provides a web-based graphical interface for monitoring, troubleshooting, and maintaining the configuration of Kea DHCP servers. Stork provides a layer of administrative control for your Kea servers, including integration with LDAP for administrator authentication and authorization.\nIdentify problems fast A graphical interface is particularly helpful for spotting red flags quickly.
The Stork agent can pull information from both the Kea server and the platform it is running on, combining configuration information with current state.\nMonitor CPU and OS data, including speed, memory, memory usage, operating system, and software versions View DHCP pool utilization, including shared network utilization View lease activity statistics, or search for an individual lease to view details See which Kea servers are in a High Availability relationship and monitor their failover status Click to view Kea server log tails, or to check on connectivity between Stork and Kea Spot changes in DHCP traffic activity in customizable Grafana charts View Subnet Utilization Inspect Leases Perform frequent, recurring configuration tasks more easily Stork allows every member of the administration team to make configuration changes, without having to remember CLI commands. A built-in configuration checker will verify these before applying them.\nView the current running configuration of any Kea server in JSON structured format Configure and manage DHCP host reservations via GUI*, including related DHCP options Configure and manage subnets and pools via GUI*, including labeling Grafana tables with subnet names (*) These two features require commercially licensed Kea hooks, which implement the Kea API for remote management of these functions.\nModify Subnet Configuration Manage Host Reservations New Stork demonstration system Still not sure?\nRecently, we have launched an online Stork demo system and we invite you to try it out at: https://demo.stork.isc.org/login. This demo version resets every hour. If you would prefer a more extended trial, you can install our packaged demo version, which includes a DHCP traffic generator, a handful of virtual Kea servers, and of course the Stork agents and server.
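Under the hood, Stork talks to Kea through the control channel that Kea's control agent exposes over HTTP, and you can experiment with that channel directly against a test server. A minimal sketch of the kind of call involved; the URL and port here are assumptions (adjust them for your deployment), and the sketch is illustrative rather than a supported client:

```python
import json
from urllib import request

def kea_command(command, service=("dhcp4",)):
    """Build the JSON body that Kea's control agent expects."""
    return {"command": command, "service": list(service)}

def send(body, url="http://127.0.0.1:8000/"):
    """POST a command to the control agent (URL is an assumed default)."""
    req = request.Request(url, data=json.dumps(body).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# e.g. fetch the running DHCPv4 configuration, much as a dashboard would:
body = kea_command("config-get")
# result = send(body)   # uncomment against a live control agent
```

The same request shape works for other commands; only the `command` string and the target `service` list change.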
One of our QA engineers has recorded a short video that walks you through the entire installation process for the demo.\nStork 2.0 - The First Stable Version This release of Stork addresses an important security requirement: users are now required to change their password from the default startup password upon first usage. The login screen is now customizable, so the administrator can add local instructions. Stork now reports on newer software versions available (although with this release that information is static, it will be dynamic in the future). There are many other bug fixes and updates, including updated Grafana templates, and we have suppressed some repetitive log messages that users found annoying.\nWith this release, the Stork team will begin maintaining a stable version for production usage, while releasing new features on a Stork 2.1.x development branch. We plan to create a new stable version every six months, to maintain our rapid development cadence, and support each version for a total of nine months to allow time for migration. Please refer to the ISC Software Support Policy to learn about the planned Stork release schedule.\nDownload and Installation To download Stork sources, go to ISC\u0026rsquo;s Downloads page. Release notes are posted beside the tarball download link. Most users prefer to install our packages for Debian, Alpine, or RPM-based systems from ISC\u0026rsquo;s Cloudsmith repo.\nThe Stork Quickstart Guide will get you up and running quickly. The Stork Administrator Reference Manual (ARM) may also be helpful.\nStork Professional Technical Support Available With this release, ISC is now offering professional technical support for Stork. This will be offered as a line-item on a Kea DHCP support contract. 
Support for Kea is available as an annual subscription, with several levels:\nBronze - Premium and Subscriber hooks; ASNs; email support during business hours Silver - Premium, Subscriber, and the Role-Based Access Control (RBAC) hook; ASNs; 24x7 email and phone support with SLAs Gold - Premium, Subscriber, and the RBAC hook; ASNs; 24x7 email and phone support with faster response times The annual cost of our Kea support subscriptions is based on deployment size, as measured by the number of simultaneous leases provided. For more information on the support options, please see our Support page and our Kea Support Subscription datasheet. Please feel free to contact us for more information.\nReferences Stork online demo system Stork-users mailing list Stork Administrator Reference Manual (ARM) Stork Release Roadmap ","permalink":"/blogs/stork-2-0/","tags":["Kea","Stork"],"title":"Stork 2.0: Open Source DHCP Management Tool"},{"categories":["BIND","Open Source","Security"],"contents":" On the long-term scale of human history, 25 years is nothing. But on the Internet, 25 years is a very long time!\nIn 1999, maybe you connected to the World Wide Web via a 33.6 Kbps modem in your living room, so you could do an AltaVista search for the latest updates on the Y2K problem. Or maybe you mostly used your dialup connection to send email through your AOL account. Or perhaps you didn’t even have Internet access at home yet at all! But plenty of companies, organizations, and governments were already using the Internet to get their business done.\nAs anyone who has ever used any software knows, no matter how careful programmers are, software will have bugs in it. Some bugs are minor and fairly benign, while others are much more significant and widespread. 
Researchers at MITRE recognized that it would be essential to have a central clearinghouse of information about these important software vulnerabilities to ensure that they could be addressed properly, which led to the creation of the Common Vulnerabilities and Exposures (CVE) Program in 1999.\nISC is proud to have been a participant in the program since its inception, and is pleased to help commemorate its 25th anniversary in October 2024. Since the program’s earliest days, ISC has been responsibly reporting software vulnerabilities. The first BIND CVE, CVE-1999-0186, was published on September 29, 1999, and there have been approximately 100 more BIND CVEs since then.\nIt\u0026rsquo;s hard to be excited about finding major bugs in our software, but ISC devotes significant resources to maintaining a system for soliciting, analyzing, responding to, fixing, and disclosing any security vulnerabilities that are discovered. We try to be transparent with our users so that they can feel confident that, even when a vulnerability is found, we will address it appropriately. 
We maintain a matrix of BIND vulnerabilities at https://kb.isc.org/docs/aa-00913 that users are encouraged to consult at any time, and invite anyone who thinks they may have uncovered a BIND security issue to follow the instructions at https://www.isc.org/security-report.\nAs the CVE program has evolved, we have joined it as a Numbering Authority so we can assign our own CVE identifiers; we also coordinate with other vendors, researchers, and organizations to manage protocol-wide vulnerabilities.\nFor anyone interested in learning more about the CVE scoring process and how it applies to the DNS, ISC staff member Cathy Almond will be presenting at the upcoming DNS-OARC 43 meeting in Prague on October 27, 2024; details are at https://indico.dns-oarc.net/event/51/contributions/1096/.\nISC remains committed to responsibly reporting CVEs in any of our software, as part of our mission to keep the Internet safe and open to all. We thank The MITRE Corporation, the Homeland Security Systems Engineering and Development Institute (HSSEDI), and the U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) for their longtime stewardship of the CVE program. Read the 25th Anniversary Report at https://www.cve.org/Resources/Media/Cve25YearsAnniversaryReport.pdf.\n","permalink":"/blogs/cve-anniversary/","tags":["BIND","Open Source","Security","CVE"],"title":"CVE 25th Anniversary"},{"categories":["BIND","Open Source"],"contents":"Today, ISC is proud to release BIND 9.20.0, our newest stable branch. This new release of BIND is available on our downloads page.\nThis branch will be supported for four years, through the first quarter of 2028, per our current software release plan. 
After several months of experience in production, we plan to declare this version to be ESV (an Extended Support Version), likely around the end of 2024 or early 2025.\nApplication Infrastructure Improvements in BIND 9.20.0: The application core (the infrastructure that holds everything together) has been rewritten to use libuv asynchronous event loops exclusively. In BIND 9.16, we introduced a new networking manager using libuv as an asynchronous event handler on top of the existing application infrastructure. In BIND 9.20, the transition to libuv asynchronous loops is complete and BIND 9 is powered by libuv from the ground up. This simplifies and streamlines the internal infrastructure and allows us to keep the data processing pinned to threads and reduce context switching, which improves overall resource consumption. At the same time, we are using specialised threadpools provided by libuv to offload long-duration tasks and, instead of quantising the work on our own, we rely on the operating system scheduler to provide fair scheduling between the networking and offloaded threads. This simplifies the code that powers Response Policy Zones, Catalog Zones, Zone Transfers, DNSSEC Validation, and a couple other long-running tasks - and improves latency when long-running tasks are mixed with normal DNS queries. A new database backend, called QP trie, has been added to BIND 9 and made the default cache and zone database, replacing the venerable RBTDB (Red-Black Tree Database). The QP trie database uses the Userspace RCU (Read-Copy-Update) Library, which is now mandatory to compile and run BIND 9. Using Userspace RCU will allow us to remove POSIX locking as a synchronisation mechanism and replace it with Quiescent-State-Based Reclamation (QSBR) as a memory reclamation mechanism. Much work remains to be done, but in the future you should expect BIND 9 to be more scaleable on systems with many CPUs. 
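The division of labor described above, where short fast-path work stays on the event loops while long-duration tasks are offloaded to a threadpool, is easiest to see in a small sketch. This Python analogy uses asyncio's executor offload to mimic the pattern; it is an illustration only, not BIND's actual libuv code:

```python
import asyncio

def long_running_task(n):
    # Stand-in for work BIND offloads (zone transfers, DNSSEC validation, ...)
    return sum(i * i for i in range(n))

async def answer_query(q):
    await asyncio.sleep(0)  # fast-path work stays on the event loop
    return f"answer-{q}"

async def main():
    loop = asyncio.get_running_loop()
    # Offload the heavy job to a worker thread; queries keep flowing.
    heavy = loop.run_in_executor(None, long_running_task, 100_000)
    answers = [await answer_query(q) for q in range(3)]
    return await heavy, answers

total, answers = asyncio.run(main())
```

The point of the pattern is that the operating system scheduler, not the application, decides how to interleave the worker thread with the loop, which is what keeps query latency low while a long job runs.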
The DNS name compression algorithm used in BIND 9 has been revised: it now compresses more thoroughly, so responses containing names with many labels may have a more compact encoding than before. Improvements in DNSSEC Support DNSSEC Policy is now the only option for managing signed zones. The auto-dnssec option has been removed. Support for DNSSEC multi-signer model 2 (IETF RFC 8901) when using inline-signing was added. PKCS#11 Support has been restored by utilising the new OpenSSL 3.0.0 Engine API. HSM support was added to dnssec-policy. Keys can now be configured with a key-store that allows users to set the directory where key files are stored and to set a PKCS#11 URI string. The latter requires OpenSSL 3 and a valid PKCS#11 provider to be configured for OpenSSL. Feature Updates Catalog Zones schema version 2 (as described in the \u0026ldquo;DNS Catalog Zones\u0026rdquo; IETF draft version 5 document) is now supported by BIND 9. More Extended DNS Errors are now supported. The DNS over TCP and DNS over TLS implementations have been refactored to use a unified transport. This in turn allowed us to add the new PROXYv2 transport. PROXYv2 support is available for all DNS transports currently supported by BIND. Support for User Statically Defined Tracing (USDT) probes has been added. These probes enable fine-grained application tracing using the perf command and introduce no overhead when they are not enabled. We plan to add more user probes in future releases. The statistics channel now includes information about incoming zone transfers in progress. Resolver Performance Testing Results For a description of the resolver performance test bed used to create the charts below, please see this earlier blog. Note that the most impressive change is significantly reduced memory usage with better latency. The improvements we see in BIND 9.20 vs BIND 9.18 are a continuation of the improvements we saw from BIND 9.16 to BIND 9.18.
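For readers curious about the name compression mentioned earlier: DNS messages use the RFC 1035 pointer scheme, in which a name may end in a two-byte pointer (top bits 0xC0) back to an earlier occurrence of the same label suffix. A toy encoder, simplified to skip case-insensitive matching and the 14-bit offset limit, shows the idea; it is a sketch, not BIND's implementation:

```python
def encode_names(names):
    """Encode a list of DNS names with RFC 1035 pointer compression.

    Offsets of previously written suffixes are remembered so a later
    name can end in a 2-byte pointer (0xC0 | offset) instead of
    repeating the labels.
    """
    out = bytearray()
    seen = {}  # tuple of remaining labels -> offset where that suffix starts
    for name in names:
        labels = name.rstrip(".").split(".")
        i = 0
        while i < len(labels):
            suffix = tuple(labels[i:])
            if suffix in seen:
                ptr = seen[suffix]
                out += bytes([0xC0 | (ptr >> 8), ptr & 0xFF])
                break  # a pointer terminates the name
            seen[suffix] = len(out)
            label = labels[i].encode()
            out += bytes([len(label)]) + label
            i += 1
        else:
            out += b"\x00"  # root label terminates an uncompressed name
    return bytes(out)

# "mail.example.com" reuses the "example.com" suffix of the first name:
wire = encode_names(["www.example.com", "mail.example.com"])
```

Here the second name costs only 7 bytes on the wire (the "mail" label plus a pointer) instead of 18, which is why more thorough compression shrinks responses with many repeated suffixes.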
BIND 9.20 is better-performing than BIND 9.18, and far better-performing than BIND 9.16.\nThe charts in this section show aggregate results from three repetitions of the test on each version. The solid line shows the average of the three tests, while the colored background shows the range between minimum and maximum for each version. That is, the wider the colored background is, the more unpredictable the characteristic is, and vice versa.\nResponse latency - How quickly does the resolver respond?\nThe most useful but also the most convoluted metric is response latency, which directly affects user experience. Unfortunately, DNS latency is wildly non-linear: most answers will arrive within a split-millisecond range for all cache hits. Latency increases to a range of tens to hundreds of milliseconds for normal cache misses and reaches its maximum, in the range of seconds, for cache misses which force communication with very slow or broken authoritative servers.\nThis inherent nonlinearity also implies that the simplest tools from descriptive statistics do not provide informative results.\nTo deal with this complexity, the fine people from PowerDNS developed a logarithmic percentile histogram which visualizes response latency. It allows us to see things such as:\n95 % of queries were answered within 1 ms (cache hits) 99 % of queries were answered within 100 ms (typical cache misses) 99.5 % of queries were answered within 1000 ms (problematic cache misses) On these charts, the lines that are closer to the bottom left corner of the chart are showing lower latency, which is a better result. Flat lines on the top at the 2000 ms mark show client timeouts.\nUDP Traffic For UDP performance tests, we concentrate traffic we captured on 15 real resolvers on a single test box, pushing BIND to its limits. The first chart shows latency in the first minute of the test, i.e. \u0026ldquo;cold cache\u0026rdquo;.
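Each of the percentile readings listed above is simply one point on a cumulative latency curve, and computing such a point from raw samples is straightforward. A minimal sketch; the sample distribution below is made up for illustration:

```python
def share_answered_within(latencies_ms, limit_ms):
    """Fraction of queries answered within limit_ms: one point on the
    logarithmic percentile curve described above."""
    return sum(1 for v in latencies_ms if v <= limit_ms) / len(latencies_ms)

# Made-up distribution: 950 cache hits, 40 ordinary misses, 10 slow misses
samples = [0.5] * 950 + [80.0] * 40 + [1500.0] * 10

hit_rate_1ms = share_answered_within(samples, 1)      # 0.95
hit_rate_100ms = share_answered_within(samples, 100)  # 0.99
```

Plotting many such points with a logarithmic percentile axis yields the curves discussed below, where the interesting behavior lives in the last few percent of queries.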
This is when the resolver is under highest stress.\nUDP Response latency This chart shows the cold start of BIND:\nVersion 9.20.0 is not able to answer roughly 3 % of queries within the 2000 ms client timeout - which is a consequence of us pushing the resolver to its limits. Under the same conditions, BIND 9.18.28 was not able to answer 15 % of queries, and the end-of-life version BIND 9.16 could not answer roughly 25 % of queries. In other words, this chart shows a massive improvement in the cold-cache efficiency of BIND 9.20.0.\nAfter the first minute, the cache is already populated with records and becomes \u0026ldquo;hot\u0026rdquo;. This changes latency for clients significantly:\nIn this particular scenario, hot cache latency has not changed significantly between BIND 9.18.28 and BIND 9.20.0, while end-of-life BIND 9.16 struggled to keep up with the load. The wide colored background around the blue line shows large instability in latency across three repeated tests.\nUDP CPU utilization Let\u0026rsquo;s have a look at CPU load during the first two minutes of the test. We monitor the time BIND processes spent using the CPU as reported by the Linux kernel Control Group version 2 metric usage_usec, and then normalize the value in a way which gives 100 % utilization = 1 fully utilized CPU. Our test machine has 16 cores, so its theoretical maximum is 1600 %. CPU usage is a cumulative metric and we plot a new data point every 0.1 seconds.\nHere we can see higher CPU load in the first 11 seconds of the test for BIND 9.20.0, and generally slightly lower CPU usage in the remainder of the test, compared to BIND 9.18.28. Effectively this shows better parallelization of work in BIND 9.20.0, which is how we reached significantly improved response latency in the cold-cache scenario.\nUDP Memory usage Similarly to CPU usage, we use the Linux kernel Control Group version 2 metric memory.current to monitor BIND 9\u0026rsquo;s memory consumption. 
It is documented as \u0026ldquo;the total amount of memory currently being used\u0026rdquo; and thus includes memory used by the kernel itself to support the named process, as well as network buffers used by BIND. Resolution of the resource monitoring data is 0.1 seconds, but the memory consumption metric is a point-in-time value, so hypothetical memory usage spikes shorter than 0.1 second would not show on our plots.\nIn the first 11 seconds of the test, where CPU load is highest, the memory consumption is only at one-third of the usage we saw for versions 9.18.28 and 9.16. This indicates much lower overhead when handling cache-miss traffic. The very narrow colored background around the BIND 9.20.0 line also shows that memory consumption is more predictable than it used to be. Another angle revealed by this chart is that the old BIND 9.16 allocator effectively did not return memory to the operating system at all, and BIND 9.18 still had significant room for improvement in this area.\nOver time the memory usage of versions 9.20.0 and 9.18.28 is slowly converging at the right-hand side of the chart, which indicates that per-record cache overhead in 9.20.0 is somewhat larger than it was in 9.18.28.\nTCP Traffic For the TCP performance test, we concentrate traffic we captured on five real resolvers on a single test box and force all clients to use TCP to the resolver. Individual clients keep the TCP connection open for up to 10 seconds after their last query.\nTCP Response latency The following chart shows latency in the first minute of the test, i.e. \u0026ldquo;cold cache\u0026rdquo;. During this period the resolver is under highest stress as it must do most DNS resolutions, and at the same time accept more TCP connections from clients than during steady operation.\nBIND 9.18.28 is able to handle the initial load but still 0.5 % of queries time out. On version 9.20.0 less than 0.2 % of queries time out.
Clearly, BIND 9.16 was hopelessly timing out.\nAfter the first minute the cache is already populated with records and becomes \u0026ldquo;hot\u0026rdquo;, and returning clients also have a chance to reuse existing TCP connections for subsequent queries. This changes latency for clients significantly:\nWe can see that version 9.18.28 improved latency for roughly 25 % of queries.\nTCP CPU utilization CPU load in the first three seconds of the test is higher for BIND 9.20.0, showing better parallelization of work when lots of TCP connections need to be accepted at the beginning. For the remainder of the test, version 9.20.0 generally has slightly lower CPU usage compared to BIND 9.18.28. And again, BIND 9.16 shows it\u0026rsquo;s not up to the task and its CPU usage is very high and unstable through the whole test.\nTCP Memory usage For TCP-only traffic we can see massive improvement in memory consumption. Version 9.20.0 consumes roughly 6x less memory than version 9.18.28 while handling the same traffic, and it still provides better latency and consumes less CPU time.\nDNS-over-TLS Traffic For the DoT performance test, we concentrate traffic we captured on two real resolvers on a single test box and force all clients to use DNS-over-TLS to the resolver. Individual clients keep the TLS connection open for up to 10 seconds after their last query. This time we skip BIND 9.16 because it does not support DoT at all.\nDoT Response latency The following chart shows latency in the first minute of the test, when the cache is empty or \u0026ldquo;cold\u0026rdquo;. During this period the resolver is under highest stress as it must generate queries to resolve most names and at the same time accept more DoT connections from clients than during steady-state operation. The TLS handshake required for DoT is an expensive operation.\nAfter the first minute the cache is already populated with records and becomes \u0026ldquo;hot\u0026rdquo;.
Returning clients have a chance to reuse existing TLS connections for subsequent queries.\nThese two charts show that BIND 9.20.0 provides better latency for roughly 50 % of the queries sent over DoT.\nDoT CPU utilization Except for better parallelization at the beginning, the CPU load of both versions is the same.\nDoT Memory usage For DoT-only traffic we can again see massive improvement in memory consumption. Version 9.20.0 consumes roughly 3x less memory than version 9.18.28 while handling the same traffic, and also provides better latency.\nISC-provided packages for BIND 9 There has been some confusion about where to find the ISC-provided packages for BIND 9.20. We made a change for the start of the new 9.20 branch, suggested by a discussion on the bind-users mailing list. The goal was to enable everyone to update easily from the package repository, without causing an unexpected upgrade from 9.18 to 9.20. ISC provides packages for Fedora, Ubuntu and Debian labelled as bind, bind-esv (for extended support version) or bind-dev (development version). Since the bind and bind-esv repos already had BIND 9.18 in them, if we had put 9.20 in the bind repo, people who intended just to get a maintenance update would have been moved up a full major version. This seemed like an unexpected and therefore undesirable outcome.\nThe new BIND 9.20.0 version is available now in the repos labeled bind-dev. After a release or two, we will move BIND 9.20.x to the bind repo, when we have a new 9.21.x version to post in the bind-dev repository.\nReferences Release notes for BIND 9.20.0. Please see this new KB article on RRset Limits in Zones, a new limit in BIND 9.18 and 9.20 that may require configuration for some users. BIND Administrative Reference Manual for 9.20.0. We have updated the BIND Significant Features Matrix. ISC’s Software Support Policy and Version Numbering explains the release cycle.
","permalink":"/blogs/2024-bind920/","tags":["BIND","Open Source"],"title":"BIND 9.20 Brings Streamlined Core, Some New Features"},{"categories":["Open Source"],"contents":"Kea/DHCP news At the end of May, we released Kea 2.6.0, our latest stable version of Kea. This major version introduced a number of features requested by network operators: a new ping check hook for subscribers, the hub-and-spoke High Availability feature, an option to \u0026ldquo;stash agent options\u0026rdquo; to preserve them for renewals, and a new performance monitoring hook. We also re-implemented the RADIUS client in our RADIUS hook library, and added support for more client options.\nTo help users upgrade to this new version, we published a support Knowledgebase article on Changes to Be Aware of When Migrating to Kea 2.6.0.\nA lot of ISC DHCP users are just starting to look into migrating to Kea. We have collected some hopefully helpful tools and advice on a web page. Please feel free to post your questions about how to tailor your Kea configuration to meet your requirements on the kea-users mailing list, which has been quite active lately.\nThe Kea team produces a number of pre-compiled packages for Kea, and we think most of our users install from these. We have rearranged the Kea repositories on Cloudsmith so that all future development branches are now in the kea-dev (open source) or kea-dev-prv (subscriber-only) repositories. Read more in this blog post.\nStork update For those of you who have not yet tried our Stork utility for Kea, we are currently working on hosting a live demo site for you to try out. Stork can aggregate information from a number of Kea servers at once in a single dashboard. This is not intended to replace your network fault monitoring system, but it can help identify DHCP issues faster. Stork also now leverages several Kea hooks to provide a graphical interface for managing your host reservations and subnet configuration.
The graphical interface can help prevent errors and can broaden the team able to make configuration changes. For the easiest experience, we recommend installing Stork from the packages in our Cloudsmith repository and using the Stork Quickstart Guide. Stork is currently officially an experimental development project, but we are planning to release a stable version and begin offering professional support for Stork towards the end of this year.\nFurther Reading\nWe have a new KB on Importing External Certificates to Stork Changes to be aware of when migrating to Kea 2.6.0 We have updated the KB article on Logging for ISC DHCP users. We also have a new KB article on redefining standard options, and we have updated the list of options supported in ISC DHCP and Kea. If you couldn\u0026rsquo;t attend our December webinar on configuring Kea using Stork, check out the recording for a great demo. A new BIND stable branch is coming - soon, we hope! BIND 9.16 maintenance ended several months ago, after four years of active support (9.16.0 was released in March 2020). If you have not already done so, please update to BIND 9.18 as soon as possible. We are confident that the performance and stability of the 9.18 branch of BIND are much improved over the older 9.11 and 9.16 branches.\nWe currently plan to release BIND 9.20 in mid-July. This release has been delayed, unfortunately, because we have been sidetracked by a spate of DNS and BIND vulnerabilities that were particularly difficult to mitigate. We always prioritize fixing security vulnerabilities, but obviously, their discovery is an unplanned distraction.\nWhat\u0026rsquo;s the goal of all the refactoring? BIND 9.20 continues our long-term commitment to refactoring and modernizing BIND. Fifteen or so years ago, DNS operators weren\u0026rsquo;t modifying their zones as frequently, or continuously maintaining large abuse filters. A lot of tasks that were occasional back then are constant today. 
In this most recent development cycle, the BIND developers have done a lot of work to reduce the pause in regular query processing that could sometimes occur when BIND was occupied with one of these periodic tasks. Many database functions have been refactored to use a new database (qp-trie), in place of the legacy red-black tree, as part of this effort. This has been a multi-year effort to enable BIND to perform better in today\u0026rsquo;s busier network environments, and take advantage of more powerful new hardware. We don\u0026rsquo;t yet have final numbers, but we expect BIND 9.20 to be significantly more efficient than 9.18 or 9.16 in typical deployments where task management has been an issue.\nSince we mentioned zone transfers and user suspicions that they were blocking other query activity, have you noticed the new feature in the statistics channel, released in 9.18 and 9.19 earlier this year? This provides an HTML table showing incoming zone transfers by view, zone name, type, serial, transfer state, source, destination, transport, bytes, and messages received. This new statistic provides essential visibility into lengthy zone transfers.\nAll the encryption options This is also our first stable branch with complete support for encrypted DNS. The US Government\u0026rsquo;s Encrypted DNS Implementation Guidance for federal agencies includes explicit recommendations for BIND deployment in section A3.1, using DNS over HTTPS or DNS over TLS. That document was written before the full support for encrypted DNS was available in a stable branch: with BIND 9.20, it is now fully supported. In the coming development cycle we plan to add support for the new QUIC transport, which is one of several modern protocols that are collapsing multiple OSI layers for greater efficiency.\nOne other new feature in 9.20 that may prove to be particularly useful is support for the Proxyv2 protocol. 
As explained in this blog post, this feature will pass client connection information to a backend system, which will allow it to perform some of the functions of the EDNS Client-Subnet Identifier feature, but without the cache impact. This may not help those of you who have peering agreements that require providing ECS information, but for those using the information in your own network, this may be more efficient.\nDeleg record ISC is contributing to specifications for a proposed new DNS record, the Deleg record, which would provide information about server capabilities. This could provide a more efficient way of finding, for example, a server that supports DNS over TLS. This draft has progressed to the point that the IETF has just chartered a new Deleg working group. For a quick update on IETF work on the DNS, read the slides for the recent talk Dave Lawrence (tale) gave at NANOG91, where he characterizes Deleg as the biggest change to the DNS since DNSSEC. Several ISC staff will be attending the upcoming IETF120 meeting in Vancouver, to work on this and other issues. As always, we would love to meet any users of our software, or anyone who wants to collaborate with us.\nFurther Reading\nBlog announcing the new Proxyv2 feature More Proxyv2 configuration examples dig versions and default EDNS UDP buffer size Changes to serve-stale option stale-answer-client-timeout in BIND 9.19 and newer (including 9.20) Stub zones don\u0026rsquo;t work when primaries are configured for \u0026ldquo;minimal-responses yes;\u0026rdquo; Rate Limits in BIND 9 Effective vs. DNSBomb Vulnerability Did you know we maintain a BIND 9 Docker image? Recent presentations\nISC staff have been traveling quite a bit, recently attending the CENTR Jamboree in Copenhagen, IETF in Brisbane, RIPE in Krakow, NANOG in Kansas City, and an ICANN meeting in Kigali. We normally post all our conference presentations on this web site. 
Here are a few recent ones: Encrypted DNS Policy call - Matthijs Mekking - slides, recording RIPE 88 - Open Source QA Process \u0026amp; Risk Mitigation - Petr Špaček - slides, recording RIPE 88 - The World* Turned Upside Down - Jeff Osborn - slides, recording NANOG91 - Open Source QA Process \u0026amp; Risk Mitigation - Victoria Risk - slides, recording In addition, ISC\u0026rsquo;s General Counsel, Rob Carolina, appeared on the N2K/Cyberwire podcast earlier this summer to speak about, among other things, Quad9 and the DNS filtering case.\n","permalink":"/blogs/2024-summer-update/","tags":["Open Source"],"title":"Summer Update from ISC"},{"categories":["BIND","Open Source","Security"],"contents":"Authors: Jason Goertzen, David Joseph, Peter Thomassen\nBIND \u0026amp; liboqs: A PQC DNSSEC Field Study The DNS protocol is the internet’s phonebook. Send off a domain name, and receive an IP address, to which all further communications can be sent. Validating the authenticity of that IP address requires cryptography, however, and this is where DNSSEC comes in. But the cryptography used to sign domain name/IP address pairs – known as records – will be vulnerable to quantum computers once they are available, and thus faces the same complex transition to post-quantum cryptography (PQC) as the rest of the internet.\nPQC is almost ready Even if estimates for Q-day (potentially within a decade or two) might seem optimistic, rolling out new DNSSEC signing schemes is a complex and lengthy process. The PQC algorithms themselves are on the cusp of being standardized by NIST, after an 8-year process, and folks around IETF are just now starting to research how to ready DNSSEC for the quantum threat. These new PQC algorithms have much larger signatures and keys which could cause delivery issues if the DNS protocol is not adapted for these needs. 
Clearly we need to investigate …\nLet’s test it out in DNSSEC For the past year, deSEC and SandboxAQ have been evaluating how PQC would behave if it were slotted in with no additional protocol changes. This involved deploying zones – portions of the DNS namespace managed by a specific organization – signed with Falcon, Dilithium, SPHINCS+, as well as the stateful hash-based signature scheme XMSS. In order to deploy these zones, we modified BIND 9 and PowerDNS to support all of these algorithms. In this post we’ll look at BIND 9, noting that the modifications to both implementations are rather similar.\nWe used liboqs with an OpenSSL 3 provider In order to add support for the PQC algorithms, we took advantage of the open source project Open Quantum Safe. Open Quantum Safe provides a library called liboqs which contains the PQC algorithms selected by NIST for standardization. The project also offers an OpenSSL 3 provider (“oqs-provider”) which exposes those algorithms via the OpenSSL API. We ultimately used oqs-provider as the other DNSSEC algorithms were already using the OpenSSL API. Once one of the algorithms was added, supporting others was easy. We’ve published the source code on GitHub.\nStateful hash-based signatures Stateful hash-based signatures are a special type of PQC, already standardized. In contrast to the other signing schemes we tested, signers have to carefully track a “state” (in essence, a counter), whose value must change with every invocation so as to not break security. In principle, they’ll work well for some applications, including DNSSEC, but the complexities of tracking state make them impractical for use cases like load balancers signing TLS transcripts. In order to test these types of signatures, we concentrated on the performance impact, and did not worry too much about managing this state correctly. 
(In other words, we allowed ourselves to insecurely reuse state, and the signatures implemented this way do not provide authenticity; however, they are perfectly suitable for investigating protocol-related performance aspects like deliverability.)\nLiboqs is also in the process of adding support for stateful hash-based signatures. This study is the first project that is utilizing the new stateful signatures API. However, since this API is so new, oqs-provider does not support stateful signatures. This means that we had to use the low level API provided by liboqs instead. The API worked as expected, so it wasn’t too much effort to add XMSS and XMSSMT directly to the software.\nKey generation did require special treatment, however, so we made a small change. Stateful hash-based signature schemes can generate a limited number of signatures, and zone operators would likely want to be able to pick the appropriate parameter set for their zone size. Both for XMSS and XMSSMT there are a plethora of parameter sets. The more signatures you wish to allow, the larger they become. In practice, it’s likely that operators of smaller zones would prefer to use parameter sets that support fewer signatures and benefit from smaller signature sizes. However, if a parameter set was standardized and the number of signatures it generated was too small, it would prevent operators of large zones from deploying XMSS or XMSSMT.\nIt would not be reasonable to have a DNSSEC algorithm number standardized for each parameter set (12 sets for XMSS and 16 for XMSSMT). Therefore, we took a similar approach to RSA modulus sizes: “Support all of the parameter sets of XMSS and XMSSMT, or you aren’t compliant”. Our implementation differentiates between parameter sets based on information stored within the DNSKEY. 
The nice thing is that liboqs does this for us by embedding an internal OID in both the secret and public keys, requiring only the introduction of the corresponding mappings to translate between the XMSS/XMSSMT parameter sets and DNSSEC algorithm numbers without losing the specific parameter details.\nWe modded BIND 9 to support parameter sets In BIND 9 we created a custom function that takes the passed-in algorithm name. If it’s a type of XMSS or XMSSMT, it converts the algorithm name to the generic XMSS or XMSSMT name and then returns the internal OID that specifies the intended parameter set. This OID then gets passed to the key generation function. We had to add an extra XMSS-specific header file in order to achieve this, but this can likely be abstracted away if the dst_func_t struct can be modified with a “parameter init” function.\nExperimental setup In order to evaluate how these quantum-resistant signature algorithms perform in real-world DNS, we deployed several zones, each one signed by one classical or quantum-resistant algorithm. The BIND-operated test zones were set up with a KSK/ZSK presigned scheme (in contrast to the PowerDNS zones, which used a CSK setup with dynamic signing).\nWe then deployed experiments across the RIPE ATLAS network. This is a network of roughly 10,000 small nodes distributed across the globe, which researchers can use to run various kinds of network-related measurements.\nTo run DNSSEC experiments, we used a BIND nameserver (version 9.19.7, plus our modifications) which was located in the United States, and clients from across the RIPE ATLAS network submitted DNS requests using their locally configured resolver(s), receiving PQC-enabled responses. We then looked into how many responses arrived and what their properties were, depending on algorithm, transport, and the like.\nResults We find that in general, measurement results are quite noisy. 
We therefore applied pre-filtering, ignoring individual results from probe-resolver combinations that answered incorrectly for the zone signed with algorithm 8 (RSASHA256). (A correct response required the expected A record, or NXDOMAIN, depending on the case.) The pre-filter did not include a requirement for the AD bit, i.e., resolvers not performing validation were acceptable as long as the response appeared correct in terms of return code and record content.\nThe following two figures show results from BIND-operated zones for an existing name (Fig. 1) and a non-existing name (Fig. 2), with the aforementioned pre-filter applied. The two quadrants at the top are for probes querying their resolver via UDP, and the ones at the bottom via TCP. The DO bit was set only for the queries on the right.\nFigure 1: Results for an existing name.\nFigure 2: Results for a non-existing name. NSEC3 is indicated by a “3” suffix; otherwise NSEC is used. As can be seen, nearly all resolvers appear to be compatible with algorithms 8, 13, and 15, independently of transport and DO bit, for both NSEC and NSEC3 setups. This is no surprise for algorithm 8 (due to the pre-selection), and also expected for algorithm 13 (which some TLDs have deployed, presumably after verifying interoperability). For algorithm 15, we were less certain ahead of time, but it’s good to see that it does not appear to cause problems. (For numbers on actual validation support, see Fig. 4 below.)\nRegarding PQC, it can be clearly seen that response delivery rates go down significantly as response sizes increase. This most prominently applies along the algorithm dimension, where the largest packet involved in a resolution happens to correlate with the algorithm numbers we chose – as a result, SERVFAIL responses become more frequent for SPHINCS+ and XMSS-type algorithms than for Dilithium2. 
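The pre-filter described above can be sketched in a few lines of Python. This is a hedged illustration, not the study's actual code; the field names (probe, resolver, alg, rcode, a_record, expected_a) are hypothetical placeholders for whatever the real result schema looks like.

```python
# Hypothetical sketch of the probe-resolver pre-filter described above.
BASELINE_ALG = 8  # RSASHA256, the baseline algorithm used for pre-selection


def is_correct(result, expect_nxdomain):
    """Judge correctness on return code and record content only; the AD
    bit is deliberately NOT required, so non-validating resolvers pass."""
    if expect_nxdomain:
        return result["rcode"] == "NXDOMAIN"
    return (result["rcode"] == "NOERROR"
            and result.get("a_record") == result.get("expected_a"))


def prefilter(results, expect_nxdomain=False):
    """Keep only results from probe-resolver pairs that answered the
    algorithm-8 baseline zone correctly; drop all others."""
    good_pairs = {
        (r["probe"], r["resolver"])
        for r in results
        if r["alg"] == BASELINE_ALG and is_correct(r, expect_nxdomain)
    }
    return [r for r in results if (r["probe"], r["resolver"]) in good_pairs]
```

Note that the AD bit plays no part in the correctness test, matching the methodology stated above: a pair is kept as long as its baseline answer looked correct in return code and content.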
Falcon, having the smallest signatures, appears to have the fewest issues.\nIt appears that many resolvers don’t obtain DNSSEC records by default (perhaps because validation is turned off, or because the zone’s DS record already indicates an unsupported algorithm). Only when setting the DO bit to request the resolver to obtain these records do larger packets occur, and deliverability problems might get worse. Whether that really is the case depends on whether the increase in packet size is such that a particular deliverability barrier (such as MTU) is exceeded. It appears that this mostly affects SPHINCS+ and XMSS via TCP, nullifying the benefit that those algorithms otherwise enjoy from using this transport when the DO bit is absent.\nSimilar considerations apply to NSEC vs. NSEC3, where transmission boundaries may be crossed depending on the case. In the scenarios we evaluated, this does not appear to have significant effects, except in the case of Dilithium2, where the SERVFAIL rate goes from 35.8% to 43.7% when using NSEC3.\nTaking a slightly different perspective, Fig. 3 shows the frequency of correct (0) vs. incorrect (1) responses (for an existing name). While correct fractions for conventional algorithms are very close to 100% in all cases, PQC algorithms again show dependency on packet sizes as mediated by the choice of transport and DO bit, and in line with expectations. In particular, when the DO bit is not set, going from UDP to TCP increases success rates from around 70% to around 80% (and 90% in case of Falcon). A similar trend is observed when the DO bit is present, except that overall numbers are lower, going from 50% to around 70–75% when going from UDP to TCP. Exceptions are again Falcon (performing better, around 90%) and XMSS (performing a little worse than others).\nFigure 3: Fraction of correct responses (for an existing name). A surprising result was the AD bit distribution in responses (Fig. 4). 
First, it can be seen that between 33% and 46% of probe–resolver pairs claim DNSSEC validation for conventional algorithms. This is in line with what’s expected from DNSSEC validation deployment studies.\nMore interestingly, we are pretty certain that no probe was connected to a resolver that is able to validate signatures from our PQC implementations. Still, when the DO bit was set, responses from about 8.5% of probe–resolver pairs indicated for the Falcon algorithm that signatures had been validated. This appears to be clearly broken, and perhaps deserves a deeper look into what resolver software produces such responses. Curiously, the problem goes away when using TCP.\nFigure 4: AD bit (for a non-existing name). Testbed If you’d like to play around with our test zones, you can do so using our test bench at https://pq-dnssec.dedyn.io/.\nIn addition to our signing implementation for the BIND authoritative nameserver (available at bind9.pq-dnssec.dedyn.io:53), we have also implemented validation for the resolver function (running on port 5304).\nYou can query this PQC-enabled BIND resolver directly from the above page (using a DoH proxy), and also test interoperation by selecting PowerDNS Recursor to validate BIND signatures or vice versa. As PowerDNS uses a different setup (CSK instead of KSK/ZSK, and dynamic NSEC3), results may vary slightly across scenarios.\nIn case you want to use the command line, the page also has information on exactly which zones to query, accompanied by links to source code and additional measurement results.\nConclusion All in all, our measurements indicate that the PQC signing schemes investigated are not well-suited for use in the existing DNS ecosystem, with the potential exception of Falcon (which might or might not withstand cryptographers’ scrutiny). 
Some differences exist between configurations (such as whether a combined key is used or ZSK/KSK are split), but they appear to have only a gradual impact without qualitatively changing the picture.\nThese findings confirm the suspicion that DNSSEC is not going to work well with PQC unless further changes are made. These changes might be in terms of improving signing schemes so that they do fit the ecosystem’s constraints; alternatively, one may try to change message payloads in a way that is compatible with existing constraints, but at the same time evades overly large messages (e.g., by splitting keys). It is not clear what the best approach is, and thus more research is needed.\nThis project was funded by NLnet Foundation and supported by SSE.\n","permalink":"/blogs/2024-pqc-study/","tags":["BIND","Open Source","Security"],"title":"BIND \u0026 liboqs: A PQC DNSSEC Field Study"},{"categories":["Security","Open Source","BIND"],"contents":"Update posted June 27th, 2024\nThis was not the fraud we thought it was We have learned that emails we originally identified as abuse were sent by an external contractor engaged by ISC to conduct a focused and short-term lead generation campaign. We have instructed the vendor to halt that campaign.\nWe clearly suffered some communications failures here. Our communication with the vendor should have made it clear that we would not be comfortable with the approach they adopted. Plus, our internal communication failed as we lacked sufficient awareness of the campaign to respond in a more appropriate fashion when we received questions about the emails.\nWe have been assured by the vendor that this was not a bulk unsolicited email campaign. We affirm our stance that bulk unsolicited email is counter to our mission in support of Internet infrastructure.\nWe apologize for any inconvenience or disruption this event may have caused. 
We promptly canceled our abuse complaint concerning the domain name, and we ask any of you who have taken any filtering or blocking or complaint action against the domain name or the originating IP addresses to do the same. We appreciate the outpouring of sympathy from our community, many of whom have emailed us with helpful suggestions. We thank you for your continued support.\nOriginal post below\nSadly, people on the Internet are sometimes not who they say they are, and you have to read unsolicited email with skepticism. An alert and responsible IT Professional has notified us that someone is sending spam emails, masquerading as an ISC staff person, offering information about unpublished BIND software vulnerabilities. These emails are supposedly from Bree Reed bree-r.a@tryisc.com. If you receive email like this, check to see if it is from the ISC.org domain. If it is not, the email is not from ISC.\nISC does not send unsolicited bulk marketing emails. We follow a long-established, published process for disclosing security vulnerabilities in our software (ISC Software Defect and Security Vulnerability Disclosure Policy). This includes publishing vulnerabilities in our knowledgebase and announcing the fixes on our product-specific public mailing lists.\nIf you feel you have received illegitimate communications from someone purporting to be an ISC staff member, please report it. If someone other than ISC.org is offering to provide software vulnerability information about ISC software, this is suspicious and probably fraudulent. 
ISC does offer professional support services, which includes advance notification of security vulnerabilities, but we have not authorized anyone else to disclose that information prior to public disclosure.\n","permalink":"/blogs/2024-spammer-impersonator/","tags":["security"],"title":"Someone on the Internet is committing fraud"},{"categories":["Kea","Open Source"],"contents":"ISC\u0026rsquo;s software engineering team is thrilled to announce the release of Kea 2.6, the newest stable branch of the Kea DHCP server. This branch, which is suitable for production deployment, brings users a variety of new features that were previewed in the 2.5 development branch over the last 10 months. For the full list of additions and updates, please read the Kea 2.6.0 Release Notes.\nWith this release, Kea 2.2 has now reached End of Life (EOL). If you are running Kea 2.2 or an earlier version, we encourage you to update to one of the current stable versions, Kea 2.4.1 (released in November 2023) or the new 2.6 branch. Please refer to the ISC Software Support Policy to learn about the full Kea release schedule.\nWe maintain two stable branches at a time, and one development branch (which eventually becomes the next stable version). We try to maintain a steady pace of new development, to offer our customers and users the features they require. Enterprise networks are getting more complex, and since DHCP is used as a control plane for devices, we see a constant stream of new feature requests. Users who have questions or suggestions about our release timeline are encouraged to share them on the Kea-users mailing list.\nSome changes in this version are not backward-compatible. Please consult our documentation to learn more about differences that may affect you. 
Please also refer to the incompatible changes section of the release notes.\nHub-and-Spoke High Availability Kea\u0026rsquo;s High Availability (HA) feature is a popular replacement for ISC DHCP\u0026rsquo;s failover capabilities, with two commonly used configurations. In load-balancing mode, HA allows two partners to share a lease database so that both can serve clients at the same time; if one fails, the other is already up-to-date and ready to assign leases. In hot-standby mode, one server is active, serving requests, while the other waits as a backup if the first stops working.\nMany users have asked for a hub-and-spoke arrangement for High Availability. In Kea 2.6.0, it is possible for multiple Kea servers at different locations to use a single central server as their hot-standby backup. The central server is configured with different subnets for each of the spokes, so that it can take over for any branch servers that fail. This gives administrators more flexibility in designing their networks, and eliminates the need to have a backup server at every location.\nNew Hook Libraries Kea is designed to be flexible and customizable via its use of hook libraries. By keeping some features in external libraries, the Kea base code remains small and manageable, but installations that want to use the add-ons can install them as they wish. We are constantly working on developing both new open source hook libraries for all Kea users, as well as hooks that are available for separate purchase or with a paid ISC support contract.\nNew hook libraries in Kea 2.6 include:\nThe native RADIUS hook library, which lets users take advantage of RADIUS access and accounting features, as Kea can interact directly with RADIUS. This replaces an older RADIUS hook that was based on an external FreeRADIUS-client library, which was not maintained and had serious problems. 
The Ping Check hook library, which replicates a feature previously available in ISC DHCP, where Kea checks the availability of an IPv4 address before assigning it to a DHCP client. The Performance Monitoring hook library, which gives the Kea DHCPv4 and DHCPv6 servers the ability to track and report performance data. Support for More Client Options We have continued to extend support for complex device options, such as vivco sub-options with multiple different enterprise IDs. Kea manages the complex option encoding for the Discovery of Network-designated Resolvers (DNR), implemented in RFC 9463, to allow configuration of DNS for both DHCPv4 and DHCPv6 over various transports, such as TLS (DNS-over-TLS or DoT), HTTPS (DNS-over-HTTPS or DoH), and others. Kea is also fully compliant with the new RFCs for Service Binding Mapping for DNS and Service Binding and Parameter Specification.\nWe continue to implement IETF RFCs in Kea: please see the DHCPv4 standards and DHCPv6 standards lists in the Kea Administrator Reference Manual (ARM).\nFor a complete list of new features, please check out the Kea 2.6.0 Release Notes.\nDocker By popular demand, this release comes with Docker images: both pre-built Docker images and Docker files are available. Users can install a single container with a specific service, such as kea-dhcp4, or use the kea-compose script that makes deployment of a cluster of containers (kea-dhcp4, kea-dhcp6, and a PostgreSQL database) much easier. Running a DHCP server in a Docker container is a bit more complicated than a typical service, as a DHCPv4 server needs to be able to receive traffic from clients that do not yet have an IP address assigned; simple port forwarding is not sufficient. Users should read about ipvlans before deploying Docker containers. Please use caution and share your feedback. The images are based on Alpine 3.19. 
For details, see the README file in https://gitlab.isc.org/isc-projects/kea-docker/.\nCloudsmith Packages Our Cloudsmith repositories for binary packages remain popular; we believe the majority of our subscribers are now using those repositories. Native Deb, RPM, and APK packages are available for Alpine 3.16, 3.17, 3.18, and 3.19; Debian 10, 11, and 12; Fedora 36, 38, and 39; FreeBSD 13; RHEL 8 and 9; and Ubuntu 18.04, 20.04, and 22.04. All packages are built for amd64 architecture. For details, see https://cloudsmith.io/~isc/repos/.\nKea 2.6.0 is also the first release that provides native packages for ARM (aarch64) architecture for some systems.\nKea\u0026rsquo;s official APK, Debian, and RPM packages follow a consistent packaging standard. Please see the Installation From Cloudsmith Packages and Caveats When Upgrading Kea Packages sections in the Kea ARM for more details.\nKea Hooks Basic Commercial End User License The Kea open source base code and hooks remain licensed under MPL 2.0. Certain Kea hook libraries are available as a Premium hooks package, purchased online without support. These premium hooks are subject to the Kea Hooks Basic Commercial End User License.\nThe Premium bundle of hooks is available to smaller businesses and non-profits at modest price points: we offer 12-month license subscription options for 1,000, 6,000, 15,000, and 30,000 active leases, at prices starting at $549. We think it is fair for large enterprises and service providers to pay more, but we want to offer a lower-cost option for universities and other non-profits. Please visit our online store to purchase the Premium hooks, or contact us for additional information.\nLarger deployments can access both the Premium hook bundle and additional Subscriber hooks, either without support at the Basic level, or with support at Bronze, Silver, or Gold levels. 
Our levels are:\nBasic - Premium and Subscriber hooks; Advance Security Notifications (ASNs); no support Bronze - Premium and Subscriber hooks; ASNs; email support during business hours Silver - Premium, Subscriber, and the Role-Based Access Control (RBAC) hook; ASNs; 24x7 email and phone support with SLAs Gold - Premium, Subscriber, and the RBAC hook; ASNs; 24x7 email and phone support with faster response times The annual cost of our Kea support subscriptions is based on deployment size, as measured by the number of simultaneous leases provided. For more information on the support options, please see our Support page and our Kea Support Subscription datasheet. Please feel free to contact us for more information.\nHelpful References\nKea Administrator Reference Manual (ARM) Kea Release Roadmap Kea Significant Features Matrix Kea Hooks Basic Commercial End User License ","permalink":"/blogs/kea-2-6-0/","tags":["Kea"],"title":"Kea 2.6.0 Released"},{"categories":null,"contents":"ISC has deployed numerous F-Root nameserver nodes, using an anycast routing setup.\nThese nodes are usually deployed at Internet Exchange Points (IXPs), and network operators are welcome to peer with us at IXPs to improve their access to the Root Server System.\nPeering Policy F-Root operates an open peering policy; however, we do not peer with networks that are not present in PeeringDB, or whose records are not maintained.\nF-Root reserves the right to discontinue peering with networks whose listed contacts do not respond appropriately to reports of network abuse, or with those that do not follow the technical guidelines for propagation of the F-Root IP networks, as described below.\nContact Information Peering requests peering@isc.org NOC e-mail noc@isc.org (preferred over phone calls) NOC phone +1 650 423 1310 (emergencies only) Peering Locations and Details We keep our PeeringDB records current with our up-to-date locations and peering details.\nTo see the list of F-Root locations, you can find 
ISC\u0026rsquo;s organization record here in PeeringDB (or search for \u0026ldquo;Internet Systems Consortium\u0026rdquo;).\nOn the right-hand side you\u0026rsquo;ll see a long list of \u0026ldquo;Networks\u0026rdquo;, including all F-Root locations.\nFinding common IXPs Wherever available, F-Root nodes peer with the routeservers at each IXP at which we are present. If a network operator feels the need to set up direct BGP sessions, we are happy to peer directly.\nIn order to find IXPs at which you could peer with F-Root, be aware that F-Root uses a different peering ASN at each location. As such, the well-known peerfinder tool will not find the intersections.\nInstead, we have written and published a Python tool named pdb-intersect that you can install and run. It essentially starts at the organization level and drills down from there.\nOnce installed, it\u0026rsquo;s run simply by:\npdb-intersect \u0026lt;your-asn\u0026gt; 3557 It will output ASN and IP info for each common interconnection point.\nRouting Policy Networks that filter based on Routing Registry objects should use AS-FROOT (RaDB) as the filter object. (The local peering ASN usually has an AS\u0026lt;asn\u0026gt;:AS-SET record, but since it\u0026rsquo;s not exporting anything, that\u0026rsquo;s of no use.)\nISC operates a few Global Nodes for the F-Root service at core locations. Global nodes advertise 192.5.4.0/23 and 2001:500:2e::/47 into the default-free zone via Tier 1 transit operators. We do not exchange these prefixes with peers.\nThe majority of F-Root nodes are Local Nodes that are intended to serve the local community. Local nodes advertise the more specific prefixes 192.5.5.0/24 and 2001:500:2f::/48 to peers, and set the well-known community NO_EXPORT.\nAll peers are required to ensure that they do not announce our prefixes into the default-free zone. 
If this requires the use of alternative BGP communities this can be arranged on request to our NOC.\n","permalink":"/froot-peering/","tags":null,"title":"BGP Peering"},{"categories":["BIND","Open Source","Security"],"contents":"BIND 9 is Unaffected by the DNSBomb We have been researching and testing a DNS vulnerability report called the DNSBomb (CVE-2024-33655). After extensive testing, we have concluded that this technique does not pose a significant threat to BIND 9 operators. Our testing showed that with recommended configurations:\nThe packet amplification factor for the client(s) is 1x (i.e. none). The novel part of this attack \u0026ndash; spreading out the queries over time, and then tricking the resolver into responding within a short time window \u0026ndash; is already sufficiently mitigated by existing limits that are in place. The DNSBomb Attack The DNSBomb attack was described by Xiang Li from Tsinghua University NISL Lab and responsibly disclosed to multiple DNS software development teams for assessment. This is a pulsing denial-of-service attack (PDOS) that leverages features such as query rate limits, query timeouts, and maximum response size settings to create timed floods of responses using a maliciously designed authority and a vulnerable recursive resolver. Their attack also incorporates response amplification, by using specially-crafted zones on the authority to return large responses.\nThere are three critical steps to construct the DNSBomb attack:\nAccumulating DNS queries (by extending the timeout window to delay responses artificially) Amplifying responses (using query aggregation on the authoritative server and domains specially crafted to return large answers) Collecting and \u0026ldquo;pulsing\u0026rdquo; responses to the target resolver in large batches Testing the Limits in BIND 9 After testing some variants of this attack vs BIND 9, we found multiple ways to limit or reduce the impact of DNSBomb on BIND 9 resolvers. 
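For reference, the limits involved can be expressed in named.conf roughly as follows. This is a sketch: the first three values simply restate the compiled-in defaults discussed in this post, and the responses-per-second figure is an illustrative assumption, not a tuned recommendation:

```
options {
    // Defaults restated for illustration; named uses these even if omitted.
    clients-per-query 10;        // initial quota of clients waiting on one query
    max-clients-per-query 100;   // hard cap on clients waiting on one query
    recursive-clients 1000;      // total outstanding recursive requests

    rate-limit {
        // Illustrative value only; size this for your own traffic profile.
        responses-per-second 25;
    };
};
```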
Most of these are configured by default, but there are also rate limits that can be configured to further reduce the effectiveness of this attack.\nThese limits are effective against the DNSBomb attack:\nclients-per-query / max-clients-per-query are set by default to 10 and 100, respectively. They limit the total number of outstanding clients (requests) for any single query. In the worst case, there can be 100 outstanding requests. recursive-clients is configured by default to 1000. This limits the total number of outstanding requests. After reaching 90% of this limit (i.e. 900 requests by default), the oldest requests are dropped and can no longer contribute to the effectiveness of this attack. responses-per-second is a rate limit which can be configured to further mitigate this attack. It limits the number of UDP responses for a particular query per network segment. Notes on the Amplification Factor Amplification attacks are a common problem in the DNS. DNS responses are typically much larger than the corresponding queries, so the main difference between regular query/response patterns and abusive ones is the amplification factor. Wide variations in query volume can be seen under normal operation, so we look for amplification factors of at least two orders of magnitude when considering whether an amplifier is actually a potent attack vector.\nHere\u0026rsquo;s the worst case scenario for a resolver with a default configuration under the DNSBomb attack:\nThe attacker (spoofing the victim\u0026rsquo;s address) sends 1000 packets over time (up to 10 seconds), while the authoritative server (under the attacker\u0026rsquo;s control) withholds the answer for that amount of time. Once the attacker\u0026rsquo;s authoritative server responds, the resolver answers all outstanding queries for the attacker\u0026rsquo;s domain. Assuming the answer is crafted to utilize the maximum packet size that will be transmitted over UDP, each answer could be 1232 bytes long. 
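The scaling of this worst case can be checked with simple arithmetic. The sketch below models only the recursive-clients cap and the 1232-byte response size from the scenario above, ignoring the per-query quotas:

```python
RESP_SIZE = 1232          # bytes: maximum UDP answer size assumed in this post
RECURSIVE_CLIENTS = 1000  # BIND 9 default limit on outstanding requests

def burst_bytes(queries_sent: int) -> int:
    # Beyond the recursive-clients quota, the oldest outstanding requests
    # are dropped, so additional spoofed queries add nothing to the pulse.
    outstanding = min(queries_sent, RECURSIVE_CLIENTS)
    return outstanding * RESP_SIZE

for q in (1_000, 10_000, 100_000):
    print(f'{q} queries -> {burst_bytes(q) / 1e6:.2f} MB burst')
```

However many packets the attacker accumulates, the burst plateaus at roughly 1.2 MB.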
For 1000 outstanding queries, the total amount of data sent could be up to 1.2 MB within a very short time window. The paper argues that this means a significant bandwidth amplification factor. While that may be true under ideal conditions, this attack doesn\u0026rsquo;t scale. Due to the recursive-clients limit (which defaults to 1000), any new requests will cause the oldest ones to be dropped, no longer contributing to the bandwidth amplification factor.\nIn other words, if the attacker sent 10k or 100k packets instead, the short data burst on the victim\u0026rsquo;s side would still be just 1.2 MB of data.\nIn real-world conditions, the attack is likely to be even less efficient, since the full impact would require that the attacker be the only client using this resolver and thus be able to utilize the entire recursive-clients limit for itself. If there are other clients, they\u0026rsquo;re likely using at least some of this quota, making the attack even less effective.\nReferences ISC\u0026rsquo;s BIND 9 GitLab issue #4398. This issue will be hidden until the original reporter\u0026rsquo;s technical paper is published, currently planned for May 22, 2024 at the IEEE Symposium on Security and Privacy. The reporter Xiang Li\u0026rsquo;s homepage: https://lixiang521.com/ Network and Information Security Laboratory, Tsinghua University: https://netsec.ccert.edu.cn/ ISC\u0026rsquo;s Software Defect and Security Vulnerability Disclosure Policy ISC\u0026rsquo;s CVSS Scoring Guidelines The BIND 9 Administrator Reference Manual includes a section on deploying BIND securely. We assume these best practices are followed in assessing vulnerabilities. ","permalink":"/blogs/2024-dnsbomb/","tags":["BIND","Security"],"title":"Rate Limits in BIND 9 Effective vs. DNSBomb Vulnerability"},{"categories":null,"contents":"These are a few of the DNS-related tools, websites, and books we have heard of. 
Please visit our ISC DHCP tools and Kea/IPv6 tools pages if those are relevant to your needs.\nPlease note that it is your responsibility to check the licensing terms of any software you download. We have not tried all of these; many of them have simply been added on the suggestion of some of our users, so we can\u0026rsquo;t make any specific claims about suitability or quality. Some of the information is quite old, but may still be generally useful.\nWe welcome notifications for additions, deletions, or broken links; please let us know if something we are linking to is inaccurate. Send any suggestions or corrections to web-request at isc dot org.\nThe tools are sorted into five categories:\nDiagnostic tools Provisioning tools Testing tools, monitoring Miscellaneous other things Useful guides, books, and how-to articles 1. Diagnostic tools Zonemaster - Zonemaster, developed by IIS and AFNIC, is a web-based zone checker. It will run a number of health checks on a domain, including DNSSEC but also basic checks for accessibility, consistency, delegation and basic security. Zonemaster can also be used to test an undelegated domain (for example, prior to registering it). Zonemaster will save the history from prior scans, useful for troubleshooting problems. DNS Viz - Highly recommended. DNSViz provides a visual analysis of the DNSSEC authentication chain for a domain name and its resolution path in the DNS namespace, and it lists configuration errors detected by the tool. Verisign DNSSEC Debugger - A DNSSEC debugger. DIG tool for Apple iOS - Free, on the App Store. Created by Ray Bellis of ISC, this tool is a port of the dig tool included with the BIND distribution to the Apple iOS platforms (iPhone and iPad). Sadly this is no longer maintained, and is based on BIND 9.14.2, before we implemented DoH and DoT, so these features are not supported. dig on the web - an implementation of ISC\u0026rsquo;s dig tool hosted on a web page. 
dig GUI - another implementation of dig hosted on a web page. No idea who operates this. ISC DNS Checker - Free, on the App Store. Also by Ray Bellis, this is a resolver protocol-conformance tester for Apple IOS. This is not actively maintained. EDNS Compatibility Tester - BIND developer Mark Andrews created this site and monitors the on-going scanning of the DNS root, top-level domains, and several lists of top Internet domains. Check your own domain or see the historical performance of the domains we monitor. BIND 9 rndc module for NodeJS - Ray Bellis of ISC published this library for communicating with BIND 9.9 and later versions via the rndc interface. DNS OARC software tools - multiple tools here, some of which are listed separately below dns_parse takes as input a pcap of DNS data and produces a complete, trivially parsable, human readable ASCII version of the same data. Last updated in 2016. Capture DNS - A simple program to capture and show DNS queries DNS client - DNS Client is an ASP.NET Core web application hosted on https://dnsclient.net/. It can also be downloaded as a portable web app and run locally on Windows, Linux and MacOS. Supports DoH and DoT. DNS Looking Glass - This site, maintained by Frederic Cambus, enables you to see what people querying your site from different locations (different resolvers) would see. This tool is updated to issue queries via DNS over HTTP. DNS Traversal checker - IPv4 only, but we find it a very useful tool. dnstop - traffic analyzer - Written by Duane Wessels, published by The Measurement Factory. dnstop is a libpcap application that parses either a live capture or tcpdump saved file and displays your DNS traffic in table form, showing source, destination, query types, response codes, etc. Shodoflo - Python listener for dnstap - Stream your BIND query logs via dnstap to this Python listener from Fred Morris NLNET Labs Drill - Drill is a useful debugging/query tool for DNSSEC. 
Passive DNS - Passive DNS is a tool to collect DNS records passively to aid incident handling, Network Security Monitoring (NSM), and general digital forensics. Cycle Hunter - Zone checker tool that detects cyclic dependencies in DNS zones. From SIDN. 2. Provisioning tools Vinyl DNS - VinylDNS manages millions of DNS records supporting thousands of engineers in production at Comcast. The platform provides fine-grained access controls, auditing of changes, a self-service user interface, secure RESTful API, and integration with infrastructure automation tools like Ansible and Terraform. OctoDNS - OctoDNS helps manage DNS records across multiple providers, including Dyn (Oracle), and AWS. Records are stored in a git repository. OctoDNS-Netbox plugin This plugin works with the following project, Netbox DNS This MIT-licensed plugin maintains name server, zone and dns record information, and it says it supports BIND views! DNS Control - DNSControl is a system for maintaining DNS zones. It has two parts: a domain specific language (DSL) for describing DNS zones plus software that processes the DSL and pushes the resulting zones to DNS providers such as Route53, CloudFlare, and Gandi. It can talk to Microsoft ActiveDirectory and it generates the most beautiful BIND zone files ever. It runs anywhere Go runs (Linux, macOS, Windows). ARSoft.Tools.Net - \u0026ldquo;This project contains a complete managed .Net SPF validation, SenderID validation and a dns client and dns server implementation written in C#.\u0026rdquo; The list of RFCs supported is impressive. VIM editor syntax highlighter - This tool was recently updated (September 2020) and re-announced on the bind-users mailing list. From Steve Egbert. Send notify This Python script allows you to specify the SOA serial in the NOTIFY message as well. 
Perl script to send notify Denominator - Denominator from Netflix \u0026ldquo;is a portable Java library for manipulating DNS clouds.\u0026rdquo; Denominator has pluggable back-ends, including AWS Route53, Neustar Ultra, DynECT, Rackspace Cloud DNS, OpenStack Designate, and a mock for testing. It appears this has not been updated since 2016. SPF Record Validation - A web-based tool recommended on BIND-users. \u0026ldquo;These tools are meant to help you deploy SPF records for your domain. They use an actual RFC 7208 compliant library (pyspf) for tests and will dynamically test for processing limit errors (no other testers I\u0026rsquo;m aware of do this).\u0026rdquo; ZSU - From the Comprehensive Perl Archive Network, a Zone Serial Update tool by Andras Salamon. This tool appears to be abandoned, but it may still be useful. nsdiff - Posted on BIND-users: \u0026ldquo;My program nsdiff is useful for copying dynamic zones from an existing master to a new master without faffing around with rndc freeze. On the new master, run nsdiff -m oldmaster -s localhost myzone | nsupdate -l and it will axfr the zone from the oldmaster and copy it into the new.\u0026rdquo; - Tony Finch 3. Other tools (Testing, performance measurement, monitoring) DNS Shotgun - This realistic DNS benchmarking tool supports multiple transport protocols (including encrypted DNS).\nDNS dist - Described in this blog post.\nDNSPERF \u0026amp; RESPERF - These open source tools from Nominum are classic DNS performance testing utilities. DNSPERF is now being maintained by DNS-OARC.\nRespdiff - This tool allows the tester to compare responses from two different resolver implementations or versions. 
Not recommended for resolver performance testing.\ndnsjit - Tool for capturing, parsing and replaying DNS traffic.\npktvisor - \u0026ldquo;An observability agent for analyzing high volume, information dense network data streams and extracting actionable insights directly from the edge.\u0026rdquo;\nLogeater - this tool from Carsten Strotmann aggregates BIND9 logs for easier analysis.\nStork agent - Prometheus exporter for BIND 9 See the Stork documentation at ReadTheDocs, specifically the agent installation section. Pre-built packages (look for isc-stork-agent) on Cloudsmith.\nGrafana dashboard for BIND 9 - Posted by Christian Calin, ~2017.\nPrometheus exporter for BIND 9 - Published by Digital Ocean in 2016.\nDNSWitness - includes two tools: DNSdelve, an active measurement framework which uses a list of domains (for instance all the subdomains of a TLD) and can query them for various things such as the presence of SPF records, the IP addresses of the name servers, etc. Also DNSmezzo, a passive measurement tool. Located in front of a name server (recursive or authoritative), it parses the data and puts them in an SQL DBMS for easier analysis.\nMunin BIND9 Stats plug-in - Check out the other stuff in Shumon Huque\u0026rsquo;s GitHub repo while you\u0026rsquo;re there.\nFlamethrower - functional test tool for DNS by @NS1\nDROOL - replay PCAPs, from DNS-OARC\nzmap/zdns - CLI tool for high-speed DNS lookups\nThe DNS Measurement Factory tools - The Measurement Factory offers several tools for DNS, including dnsdump, a Perl script like tcpdump, and several applications for collecting and displaying DNS statistics; dnstop, DSC (DNS Statistics Collector), and Traffic Gist.\nNet DNS - Net::DNS is a DNS resolver implemented in Perl. It allows the programmer to perform nearly any type of DNS query from a Perl script.\nQuery-loc - A program to retrieve and display the location information in the DNS. From Stéphane Bortzmeyer. 
It uses the algorithms described in RFC 1876 (and RFC 1101 to get the network names). You can find examples of networks which implement this scheme in the ADDRESSES file.\nRoot Canary - An online tool to see which DNSSEC-signing algorithms your resolver can validate.\nMicrosoft ccTLD Registry Security Scan - apply via email - At the DNS-OARC Spring 2014 workshop in Warsaw, Microsoft presented a new free service they are offering to ccTLDs. Microsoft is offering a scan of ccTLD registry sites for a range of common security vulnerabilities. Since launching this, they have scanned 7 ccTLDs already and found over 130 serious security problems. The results are reported privately to the ccTLD requesting the scan.\nDNSSEC Zone Key Tool - ZKT is a tool to manage keys and signatures for DNSSEC-zones.\nGetDNS - At the Spring 2014 DNS-OARC workshop, NLNet Labs introduced their new DNS API, GetDNS. This API, and the library that implements it, are intended to provide access to DNSSEC validation to higher-level (non-DNS) applications, such as, for example, DKIM.\n4. Miscellaneous other interesting things. FIRST DNS Abuse Techniques Matrix - comprehensive hello-dns A significant experimental DNS implementation aimed at teaching the basic requirements of the protocol. See blogs at PowerDNS and RIPE Labs for an overview. DRINK An experimental authoritative DNS server intended for dynamic answers (answers depending, for instance, on the client). It is just for fun and it does not pretend to replace existing programs. But you may want to read its source code, or use its online demo, at dyn.bortzmeyer.fr. DNS Root Visualizer A hosted tool that displays the DNS root systems on a world map. Developed by Ray Bellis of ISC. 5. 
Useful guides, books, and how-to articles US CISA - Encrypted DNS Implementation Guidance NIST SP 800-81 Secure Domain Name System (DNS) Deployment Guide Version 2, from the US Department of Commerce, National Institute of Standards and Technology (NIST) (several versions; check the dates). Version 3, draft April 2025, significantly updated! Note that there is a new Security Technical Implementation Guide (STIG) checklist tool, search for BIND 9 (the \u0026lsquo;BIND DNS\u0026rsquo; checklist appears to be older). Team Cymru Secure BIND Template, updated August 2012. DNSSEC Troubleshooting tutorial (using dig), delivered at NANOG52 by Michael Sinatra, Energy Sciences Network (ESNET). How to configure your BIND resolvers to lie using Response Policy Zones (RPZ), by Jan-Piet Mens, April 2011. NZNOG 2013 DNSSEC Workshop, taught by Joe Abley and Phil Regnauld; someone helpfully posted several how-tos from the class. BIND-users FAQ, by Doug Barton. How to get the most from this resource. Unofficial comp.protocols.tcp-ip.domains FAQ. \u0026ldquo;Running BIND9 in a chroot cage using NetBSD 1.6.2\u0026rdquo;, by Tim Roden. Article from the GnuDIP project \u0026ldquo;Having Your Own Domain Name with a Dynamic IP Address.\u0026rdquo; Article (in French) from Nicholas Cuissard about issues arising from the conflict between DHCPv4 client-identifier and DHCPv6 DUID. \u0026ldquo;RFC 2317 Delegations for IPv4 Blocks Less Than /24,\u0026rdquo; by Doug Barton. Cricket Liu\u0026rsquo;s classics, DNS and BIND Cookbook and DNS and BIND on IPv6 on Amazon.com (Kindle edition). Ron Aitchison\u0026rsquo;s DNS book \u0026ldquo;ProDNS and BIND\u0026rdquo; and DNS for Rocket Scientists. Michael W. Lucas\u0026rsquo;s DNSSEC Mastery, which was recommended on bind-users. The DHCP Handbook, 2nd Edition, by Ralph Droms and Ted Lemon. ISOC DNSSEC Deployment Statistics. APNIC Chief Scientist Geoff Huston\u0026rsquo;s presentations on his research, quite a bit of which is on the DNS. 
List of Free Public DNS Servers (possibly useful when troubleshooting your own) from Lifewire.com. Council of European Top-Level Domains, note the handy summaries of all of the IETF and ICANN meetings you didn\u0026rsquo;t manage to attend. ISOC DNSSEC Resources. Actively maintained resource with videos, how-to\u0026rsquo;s and deployment data. A comprehensive listing of DNSSEC-related tools is available from DNSSEC.Net. IANA DNS Parameters. ","permalink":"/dns-tools/","tags":null,"title":"DNS Tools"},{"categories":null,"contents":"Where can I find the current copy of bind.keys? Until early 2023, this page contained links to the bind.keys file, for use when starting named. However, when using any currently supported version of BIND 9, the correct defaults are pre-compiled and there is no need to configure bind.keys manually.\nWe encourage all users to install a current version of BIND 9.\n","permalink":"/bind-keys/","tags":null,"title":"Current Root Trust Anchors"},{"categories":null,"contents":"These are a few of the DHCP-related tools, websites, and books we have heard of. Please visit our DNS tools and Kea/IPv6 tools pages if those are relevant to your needs.\nPlease note that it is your responsibility to check the licensing terms of any software you download. We have not tried all of these; many of them have simply been added on the suggestion of some of our users, so we can\u0026rsquo;t make any specific claims about suitability or quality.\nWe welcome notifications for additions, deletions, or broken links; please let us know if something we are linking to is inaccurate. Send any suggestions or corrections to web-request at isc dot org.\nSee this page for help in Migrating from ISC DHCP to Kea Tools designed to work with ISC DHCP Glass - ISC DHCP GUI Monitoring and Alerting: Written by Chris Miles in NodeJS, this new application supports DHCPv4 only (at the moment). 
MIT licensed.\nISC DHCP Lease Analysis: Tool for shared network and pool range usage analysis, designed for high performance with a lot of leases. Written in C by Sami Kerola.\nISC DHCP Lease Filter: This Python library provides a filter on top of python-isc-dhcp-leases. Martijn Braam.\nISC DHCP SNMP pool tracker: dhcpd-snmp is an extension for the Net-SNMP agent and the ISC DHCP server. It allows you to monitor and track the address usage of your dynamic IP address pools through SNMP.\nBT Diamond IP IPv6 resources\nExcellent article posted on the Internet Society website, \u0026ldquo;IPv6 Address Planning: Guidelines for IPv6 Address Allocation\u0026rdquo;, by Tim Rooney. Simple on-line subnet calculator. IPv6 address planner. DHCP Probe: dhcp probe attempts to discover DHCP and BootP servers on a directly attached Ethernet network. A network administrator can use this tool to locate unauthorized DHCP and BootP servers.\nISC Forge: This is an open source validation environment for fully automated validation of DHCPv4 and DHCPv6 protocols compliance using Python, Lettuce, and Scapy. The project is hosted on GitHub.\nBooks The DHCP Handbook, 2nd Edition, by Ralph Droms and Ted Lemon.\n","permalink":"/dhcp-tools/","tags":null,"title":"DHCP Tools"},{"categories":null,"contents":"This Reporting Guide relates to the Code of Conduct we have adopted for public fora ISC controls, where the user and developer community discusses our open source projects.\nCode of Conduct - Reporting Guide If you believe someone is violating our Code of Conduct we ask that you report it to the Internet Systems Consortium by emailing conduct@isc.org. All reports will be kept confidential. In some cases we may determine that a public statement will need to be made. 
If that\u0026rsquo;s the case, the identities of all victims and reporters will remain confidential unless those individuals instruct us otherwise.\nIf you believe anyone is in physical danger, please notify appropriate law enforcement first. If you are unsure what law enforcement agency is appropriate, please include this in your report and we will attempt to notify them.\nIf you are unsure whether the incident is a violation, or whether the space where it happened is covered by this Code of Conduct, we encourage you to still report it. We would much rather have a few extra reports where we decide to take no action, rather than miss a report of an actual violation. We do not look negatively on you if we find the incident is not a violation. And knowing about incidents that are not violations, or happen outside our spaces, can also help us to improve the Code of Conduct or the processes surrounding it.\nIn your report please include:\nYour contact info (so we can get in touch with you if we need to follow up) Names (real, nicknames, or pseudonyms) of any individuals involved. If there were other witnesses besides you, please try to include them as well. When and where the incident occurred. Please be as specific as possible. Your account of what occurred. If there is a publicly available record (e.g. a mailing list archive, Gitlab issue or a public IRC logger) please include a link. Any extra context you believe existed for the incident. If you believe this incident is ongoing. Any other information you believe we should have. What happens after you file a report? You will receive an email from the ISC Code of Conduct Working Group acknowledging receipt immediately. We promise to acknowledge receipt within 24 hours (and will aim for much quicker than that). 
The Code of Conduct Working Group includes the current Director of Technical Support, Darren Ankney; the current Director of BIND 9 Development, Ondřej Surý; the current Director of DHCP Development, Tomek Mrugalski; and the current Director of Marketing and Product Management for ISC, Vicky Risk.\nThe working group will immediately meet to review the incident and determine:\nWhat happened. Whether this event constitutes a code of conduct violation. Who the bad actor was. Whether this is an ongoing situation, or if there is a threat to anyone\u0026rsquo;s physical safety. If this is determined to be an ongoing incident or a threat to physical safety, the working group\u0026rsquo;s immediate priority will be to protect everyone involved. This means we may delay an \u0026ldquo;official\u0026rdquo; response until we believe that the situation has ended and that everyone is physically safe.\nOnce the working group has a complete account of the events, they will decide how to respond. Responses may include:\nNothing (if we determine no violation occurred). A private reprimand from the working group to the individual(s) involved. A public reprimand. An imposed \u0026ldquo;vacation\u0026rdquo; (i.e. asking someone to \u0026ldquo;take a week off\u0026rdquo; from a mailing list). A permanent or temporary ban from some or all ISC public communication spaces (mailing lists, GitLab). A request for a public or private apology. We\u0026rsquo;ll respond within one week to the person who filed the report with either a resolution or an explanation of why the situation is not yet resolved.\nOnce we\u0026rsquo;ve determined our final action, we\u0026rsquo;ll contact the original reporter to let them know what action (if any) we\u0026rsquo;ll be taking. 
We\u0026rsquo;ll take into account feedback from the reporter on the appropriateness of our response, but we don\u0026rsquo;t guarantee we\u0026rsquo;ll act on it.\nFinally, the working group will make a report on the situation to the ISC Executive Team. The team may choose to issue a public report of the incident.\nWhat if your report concerns a possible violation by a committee member? If your report concerns a current member of the Code of Conduct working group, you may not feel comfortable sending your report to the committee, as all members will see the report.\nIn that case, you can make a report directly to the ISC President, Jeff Osborn, at jeff at ISC dot org.\n","permalink":"/conductreporting/","tags":null,"title":"ISC Code of Conduct"},{"categories":null,"contents":"Public Access to ISC\u0026rsquo;s GitLab Instance Click on one of the links below or navigate to gitlab.isc.org to access BIND 9 and Kea DHCP source code hosted at ISC\u0026rsquo;s GitLab. In order to submit an issue, you will have to create an account in our Gitlab system, so that we can communicate with you regarding your request, gather more details, and discuss remediation. It is not necessary to create an account to simply download source code or browse the repositories.\nBIND 9\nKea DHCP\nISC DHCP\nStork management utility\nPublic releases are always available from the downloads page on this website and the ISC FTP site.\nFor more information on contributing to ISC\u0026rsquo;s open source software, please read the related article in our Knowledgebase or the contributor information in our BIND 9 or Kea DHCP GitLab instance.\nBIND 9 To clone the repository for BIND 9, use:\n$ git clone https://gitlab.isc.org/isc-projects/bind9.git Development branch names are of the form bind-9.X, where X represents the second number in the BIND 9 version number. 
So, to check out the BIND 9.20 branch, use:\n$ git checkout bind-9.20 Whenever a branch is ready for publication, a tag of the form v9.X.Y will be placed. The 9.20.9 release, for instance, is tagged as v9.20.9.\nThe branch in which the next major release is being developed is called main.\nKea DHCP To clone the repository for Kea, use:\n$ git clone https://gitlab.isc.org/isc-projects/kea.git ISC DHCP To check out ISC DHCP source, type:\n$ git clone https://gitlab.isc.org/isc-projects/dhcp.git This will create a local directory called dhcp which will contain all branches of the source code. To look at a particular branch, you can check that branch out from your local clone of the repository. For example, to check out the most current source for the 4.2 branch, go into the dhcp directory and run:\n$ git checkout v4_2 To later update your repository, go into the dhcp directory and type:\n$ git checkout [BRANCH]\n$ git pull\n\u0026hellip;where BRANCH is the branch you are interested in viewing. Released branches are named v4_1_esv (for DHCP 4.1-ESV-Rx), v4_2 (for DHCP 4.2.x), etc. The branch in which the next major release is being developed is called master.\n
We also have a separately-maintained list of open source hook libraries for Kea on a GitLab wiki page.\nISC-DHCP Migration Tools This page provides links and descriptions of several tools ISC provides to users of ISC-DHCP, who are migrating to Kea.\nStork This is a graphical web-based dashboard for Kea. It monitors server status, pool utilization, high-availability status and includes a simple log viewer. Recent Stork versions can also manage host reservations and subnets, in Kea servers with the Host Commands and Subnet Commands hook libraries installed. Stork is open source, from ISC, and is available in tarballs or packages. Stork includes a Prometheus exporter and Grafana template.\nEndure, a new DHCP diagnostics utility written in Rust.\nKea Docker \u0026ldquo;The ISC Kea DHCP server, Control Agent and DDNS running inside individual Docker containers.\u0026rdquo; Developed by Jonas Alfredsson.\nSystemd Init Script In ISC\u0026rsquo;s GitLab you can find this contributed patch for systemd for use with Kea, developed by Rasmus Edgar Jørgensen. \u0026ldquo;Systemd handles the daemonization and actually seems to be a perfect fit for how Kea services behave atm.\u0026rdquo;\nDHCPlb - Load Balancer Engineers at Facebook wrote this DHCPv4/v6 load balancer to use with the Kea DHCP server as an anycasted DHCP service.\nKea \u0026ldquo;Run Script\u0026rdquo; Hook - newer one This is a hook for the Kea DHCP server that allows an external script to be run at various points in the processing of DHCP requests and responses. From Zorun on GitHub.\nKea \u0026ldquo;Run Script\u0026rdquo; Hook This generic hook calls an external script at any/all of the hook points supported by Kea. Written by Baptiste Jonglez.\nIntegrate Kea with NetBox This blog post explains how to import available addresses from NetBox into Kea. 
Written by Michael Schwartzkopff.\nIntegrate Kea with NetBox \u0026ldquo;This plugin allows you to view Kea status, leases and subnets in NetBox.\u0026rdquo;\nKea-NetBox sync daemon \u0026ldquo;netbox-kea-dhcp is a one-way sync daemon that exports NetBox prefixes, IP ranges and IP addresses to respectively DHCP subnets, pools and host reservations.\u0026rdquo;\nKea-Knot integration Docker \u0026ldquo;This is a Docker image containing Kea (for DHCP) and Knot DNS (for authoritative DNS; not a resolver). Kea can provide load balanced DHCP while Knot supports dynamic updates from Kea DHCP.\u0026rdquo;\nKea-telegraf-stats This script re-formats the statistics output from the Kea API into a telegraf-friendly format.\nKea memfile-database Importer This script makes it easy to migrate from a CSV-backed Kea to a database-backed Kea. From Munroe Sollog.\nSplunk-enterprise-addon-for-Kea Developed by Mikael Bjerkeland.\nKea Python Interpreter This enables creating Kea hooks with Python. By davejohncole.\nDHCP Probe DHCP Probe attempts to discover DHCP and BootP servers on a directly attached Ethernet network. A network administrator can use this tool to locate unauthorized DHCP and BootP servers.\nISC Forge This is an open source environment for fully automated validation of DHCPv4 and DHCPv6 protocols compliance using Python, Lettuce, and Scapy.\nKea Prometheus Exporter This exports Kea metrics in the Prometheus Exposition Format.\nKea \u0026ldquo;Show Leases\u0026rdquo; Script\nIPv6 Address Planning: Guidelines for IPv6 Address Allocation This is an excellent article posted on the Internet Society website, by Tim Rooney.\nSimple on-line subnet calculator\nIPv6 Address planner\ndhtest, a Linux dhcp traffic generator\n","permalink":"/kea-tools/","tags":null,"title":"Kea \u0026 IPv6 Tools"},{"categories":null,"contents":"Software from ISC ISC has developed several open source tools that our users and customers may find helpful. 
Please click the links below to access any of these tools. This is unsupported software and is provided without warranty. Please check the repo before downloading to determine if the software is actively maintained.\nAtlas vis\nWeb-based visualisation of the response time of the DNS Root System, as measured from RIPE Atlas probes\nBIND 9 rndc for NodeJS\nThis module implements the BIND 9 rndc management protocol and is compatible with BIND 9.9 and later.\ndig app for iOS\nThe dig app is a port of the ISC BIND 9 dig utility to the Apple iOS operating system. Download it for free from the App Store!\nDNS compliance testing\nProvides tools to allow Registries and Registrars (amongst others) to check the DNS protocol compliance of the servers they are delegating zones to.\nDNS echo\nThis is a program that replies to DNS packets with the same packet with the response flags modified according to RFC 1035.\ndnsgen\nISC\u0026rsquo;s DNS packet generator, dnsgen, runs under Linux.\nethq\nEthQ displays an auto-updating per-second count of the number of packets and bytes being handled by each specified NIC, and on multi-queue NICs also shows the per-queue statistics.\nForge on GitLab\nISC Forge is a DHCP test tool.\nKea Quick configuration\nThis is an experimental wizard that generates a Kea configuration file.\nMutex tracer\nPerflab on GitHub\nPerflab is a collection of DNS server performance testing tools.\npython rndc\nThis library implements the RNDC protocol natively in Python, to allow direct control of BIND instances from within Python programs.\nISC-RADIUS framework for NodeJS\nThis is a framework for implementing RADIUS servers (add your own business logic) and for adding RADIUS client support to NodeJS applications.\nresolver-check app for iOS\nThe resolver-check app is a BIND 9-based utility packaged for the Apple iOS operating system.
Download it for free from the App Store!\nStork LDAP hook library\nAn experimental hook (plugin, extension) for the Stork server that allows user authentication using LDAP (Lightweight Directory Access Protocol) credentials.\nSoftware from other organizations Below are some links to several software projects developed or maintained by organizations outside of ISC. We provide these links as a service to our customers and try to keep them up-to-date, but we cannot guarantee their accuracy. If you are interested in having your software published here, please contact us.\nNetwork Time Protocol\nInterNetNews Package\nIRRToolset\nOpenReg\nlibbind\nBIND 10\nOHGF\nLWDS-Lite\nAFTR\nNetwork Time Protocol (NTP), managed by Network Time Foundation NTP (Network Time Protocol) is an implementation of the NTP-related RFCs (and proposed RFCs) and provides an openly redistributable reference implementation of the software, including:\nAn NTP client/server (ntpd, per RFCs 5905 and 5906) An SNTP client (sntp, per RFC 5905) An SNMP subagent for NTP (ntpsnmpd, per RFC 5907) Tools for verifying, monitoring and configuring the proper operation of the NTP server The ntpd server is used on a significant number of computers and routers on the Internet, in orbit around the planet, and even in outer space, providing a robust and stable platform on top of which an organization\u0026rsquo;s timing architecture can be built.\nMore information on the software, legal issues, security advisories, etc. can be found on the NTP project website.\nThe official reference implementation of NTP is maintained, and its documentation is produced, by the Network Time Foundation. Read about the latest security information related to NTP.\nIf you would like to donate to the NTP Project, please use this link.\nLinks and Resources\nDownload NTP NTP Consortium NTP public services project NTP R\u0026amp;D project InterNetNews Package (INN) The InterNetNews package (INN) is a complete Usenet system.
It includes innd, an NNTP server, and nnrpd, a newsreading server. INN separates hosts that feed you news from those that have users reading news. INN was originally written by Rich Salz (read the USENIX paper Rich wrote about it here). ISC took over development of INN in 1996, after Rich was unable to continue supporting it and many variants of the software were forming. INN is currently maintained by Russ Allbery and Julien Élie.\nIf you are interested in receiving notice of future releases of INN, you can subscribe to the inn-announce mailing list.\nLinks and Resources\nDownload INN Russ Allbery\u0026rsquo;s INN site Join the inn-workers mailing list Internet Routing Registry Toolset (IRRToolSet) The \u0026ldquo;Internet Routing Registry Toolset\u0026rdquo; (IRRToolSet) project at ISC is an activity proposed by the RIPE NCC. The project was migrated from the USC Information Sciences Institute, where it was developed in 1997-2001 as the \u0026ldquo;Routing Arbiter ToolSet\u0026rdquo; (RAToolSet) project. As the RAToolSet was no longer developed by ISI but was used worldwide, the RIPE NCC proposed taking over the project in order to continue its development and support. The original name of the project was preserved during the transition process, but was eventually changed to IRRToolSet. The RIPE NCC has since transferred maintenance of this toolset to ISC, which accepts code from the community and provides code maintenance.\nGoals\nIRRToolSet is a suite of policy analysis tools for working with routing policies in RPSL [RFC 2622] format, registered in the Internet Routing Registry (IRR).
The main goal of the project is to make routing information more convenient and useful for network engineers, by providing tools for automated router configuration, routing policy analysis, and maintenance.\nFunctionality\nThe project consists of the following tools:\nRtConfig: analyzes the routing policies registered in the Internet Routing Registry (IRR) and produces router configuration files; CIDRAdvisor: suggests safe CIDR aggregates (i.e. those that do not violate any policy constraints) that an Autonomous System (AS) can advertise to each of its neighbour ASes; peval: a low-level policy evaluation tool that can be used to write router configuration generators; prtraceroute: prints the route and policy information packets take to a network host; prpath: enumerates a list of paths between an Autonomous System and a specified destination; aoe: C++/Tcl/Tk program that displays the aut-num object for the specified Autonomous System; roe: C++/Tcl/Tk program that lists the routes registered by the specified autonomous system; rpslcheck (prcheck): syntax-checks the aut-num object for an Autonomous System registered in the Internet Routing Registry (IRR). Current Status\nIRRToolset version 5.0.1 was released by ISC on 8 Aug 2012.\nThe project is implemented in C++ on a UNIX platform. It has been ported successfully to many platforms including Sun Sparc stations running SunOS 4.1.3 or Solaris, FreeBSD, DEC Alphas running OSF/1, BSDI, Linux and NetBSD.\nVersions starting from 4.7.2 contain aoe as well as a number of bug fixes.\nSome RPSLng (RPSL with IPv6 and multicast extensions) support has been implemented in IRRToolSet. The prototype release contains the peval, rpslcheck, and RtConfig tools, which work with the new dictionary; visit ISC\u0026rsquo;s download site for the release.
Any comments and suggestions about the prototype are appreciated.\nRe-implemented peval and RtConfig are also available as CGI scripts.\nLinks and Resources\nProject home on GitHub Download IRRToolSet Join the irrtoolset mailing list OpenReg OpenReg is an implementation of a domain registry, such as might be used by top-level domain operators to manage the delegation of domains in a \u0026ldquo;shared registry\u0026rdquo; environment. It is a completed project and no longer in active development; the latest release date is April 2003.\nOpenReg:\nSupports the Extensible Provisioning Protocol (EPP), the IETF standards-track protocol for interaction between registries and registrars defined in the following RFCs: Extensible Provisioning Protocol (EPP) Extensible Provisioning Protocol (EPP) Domain Name Mapping Extensible Provisioning Protocol (EPP) Contact Mapping Extensible Provisioning Protocol (EPP) Host Mapping Guidelines for Extending the Extensible Provisioning Protocol (EPP) Extensible Provisioning Protocol (EPP) Transport Is designed and debugged as a distributed multi-process system; Supports PostgreSQL and is designed to accommodate very large registries; Publishes zone files to be served using BIND; Gathers comprehensive profiling and load statistics; Is published as free software, under a BSD-style license. Links and Resources\nDownload OpenReg libbind At present, NetBSD maintains libbind code, now known as \u0026ldquo;netresolv\u0026rdquo;. For more information, visit http://wiki.netbsd.org/individual-software-releases/netresolv/.\nhttps://downloads.isc.org/isc/libbind/cur\nBIND 10 Completed project, no longer in active development at ISC.
Now hosted on GitHub under the name \u0026ldquo;Bundy\u0026rdquo;.\nhttps://downloads.isc.org/isc/bind10/latest\nOHGF Completed project, no longer in active development (latest release date November 2013).\nhttps://downloads.isc.org/isc/ohgf/\nLWDS-Lite Completed project, no longer in active development (latest release date January 2013).\nhttps://downloads.isc.org/isc/lwds-lite/\nAFTR Completed project, no longer in active development (latest release date December 2010).\nhttps://downloads.isc.org/isc/aftr/\n","permalink":"/othersoftware/","tags":null,"title":"Other Software"},{"categories":null,"contents":"Block Malware, Phishing, Ransomware Sites and Botnets Most modern electronic crime and network abuse rely on the Domain Name System. Protection against these threats must include the Domain Name System.\nThis kind of firewall can also protect against internal threats coming from compromised devices. As users are taking their devices out of your controlled network, acquiring infections, and then bringing them back in, enterprises need to increase their vigilance.\nA DNS firewall selectively intercepts DNS resolution for known-malicious network assets including domain names, IP addresses, and name servers. Interception can mean rewriting a DNS response to direct a web browser to a “walled garden,” or simply making the malicious network assets invisible and unreachable.\nA DNS firewall can block: Phishing - When a user clicks on a link in an email that leads, for example, to a fake banking site, you can intercept the lookup of that site. Malware - When a user attempts to navigate to an IP address known to host malware, you can redirect them to a site of your own with instructions on scanning their computer. Ransomware - Ransomware is a type of malware in which someone takes over assets on your network and blocks access to them until you pay a ransom. This is a rapidly growing threat.
Botnet command and control sites - When devices inside your network attempt to contact suspected botnet command central, you can drop the queries and log them for analysis and follow-up. Identify infected machines - By analyzing the query logs, you can track down the machines in your network that are attempting to contact these abuse sites, and clean up any infections or botnet code. DNS Reputation Data Providers DNS Response Policy Zones (RPZ) was invented at ISC and first implemented in BIND, but it is an open and vendor-neutral standard for the interchange of DNS firewall configuration information.\nEach of the vendors listed below offers proprietary data streams based on their own research. It is possible to subscribe to more than one data feed from a vendor, and to subscribe with multiple vendors.\nThe easiest way to assess the efficacy of one of these blocklists is to subscribe for a trial period, and see how many abuse queries are subsequently blocked.\nCIRA DNS Firewall Farsight Security - now part of Domain Tools Spamhaus SURBL SWITCH ThreatStop Cloud services that use DNS Firewalls For users who do not want to run their own DNS systems, there are numerous public cloud-based DNS services that include DNS firewalls, such as the examples below:\nCloudflare DNS Firewall service NIC.lv DNS Firewall service Quad9 Threat blocking Route 53 DNS Firewall Here\u0026rsquo;s how to get started Most RPZ data providers offer a free trial period. Sign up to try one or more data feeds from security experts Deteque, Farsight Security, SURBL or ThreatSTOP, or other public and private sources, based on your own needs and suspected threats. Configure your BIND 9.9 or later server to receive updates to the RPZ zone from your chosen data providers using industry-standard DNS mechanisms fully supported in BIND (IXFR/AXFR). ISC offers technical support contracts for BIND, or you are welcome to ask the BIND user community for help.
RPZ filters and policies in BIND are continuously updated with live threat intelligence data curated by the security data provider. Add a white list locally for any blocked zones you want to unblock for your users. Clients using your resolver will be blocked and redirected when they try to access malware and abuse sites, before they can be infected or compromised. Measure the effectiveness of your blocklist(s) by tracking the increase in responses that are dropped or redirected. Identify infected client machines based on the client IDs of machines making these queries. After you SEE RESULTS of using the RPZ firewall in your own network, decide whether you want to continue subscribing to the service. ISC does not test, certify, or recommend specific RPZ vendors. RPZ data feeds from any of the vendors above should work with BIND.\nTechnical Details The rules in a Response Policy Zone consist of triggers or filters that identify which responses to modify, and policy actions to apply to these responses. Each rule can use one of five policy triggers and specify one of eight policy actions.\nResponse Policy Triggers by the query name. [QNAME] by an address which would be present in a truthful response. [RPZ-IP] by the name or address of an authoritative name server responsible for publishing the original response. [RPZ-NSDNAME and RPZ-NSIP] by the IP address of the DNS client. [RPZ-CLIENT-IP] Response Policy Actions to synthesize a “domain does not exist” response. [NXDOMAIN] to synthesize a “name exists but there are no records of the requested type” response. [NODATA] to redirect the user via a CNAME to a walled garden. [CNAME example.org] to replace the response with specified data. [Local Data] to require the client to re-submit the query via TCP. [CNAME rpz-tcp-only] to exempt the response from further policy processing. [DISABLED, CNAME rpz-passthru] to drop the query, without any response to the client.
[CNAME rpz-drop] The most common use of a DNS firewall is to poison a domain name, IP address, name server name, or name server IP address. Poisoning is usually done by forcing a synthetic “domain does not exist” response. This means that, given a list of known “phishing” domains, you could make those names unreachable for your customers or end users just by adding firewall policy to your recursive DNS server, with a trigger for each known “phishing” domain, and an action in every case forcing a synthetic “domain does not exist” response. Or you could use a data replacement action, such as answering for these known “phishing” domains with the name of a local web server that can display a warning page. Such a web server would be called a “walled garden.”\n","permalink":"/rpz/","tags":null,"title":"Response Policy Zones (RPZ)"},{"categories":null,"contents":"BIND Version 4: UC Berkeley and DEC The Berkeley Internet Name Domain (BIND) package was originally written in the early 1980s at the University of California at Berkeley as a graduate-student project, under a grant from the US Defense Advanced Research Projects Agency (DARPA). Versions of BIND through 4.8.3 were maintained by the Computer Systems Research Group (CSRG) at UC Berkeley. Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou made up the initial BIND project team.\nAfter that, additional work on the software package was done by Ralph Campbell. Kevin Dunlap, a Digital Equipment Corporation (DEC) employee on loan to the CSRG, worked on BIND for two years, from 1985 to 1987. Many other people also contributed to its development during that time, including Doug Kingston, Craig Partridge, David Waitzman of BBN Technologies, Smoot Carl-Mitchell, Mike Muuss, Jim Bloom, and Mike Schwartz. BIND maintenance was subsequently handled by Mike Karels and O. Kure.\nBIND versions 4.9 and 4.9.1 were released by Digital Equipment Corporation (now Hewlett-Packard Enterprise).
Paul Vixie, then a DEC employee, became BIND\u0026rsquo;s primary caretaker. Paul was assisted by Phil Almquist, Robert Elz, Alan Barrett, Paul Albitz, Bryan Beecher, Andrew Partan, Andy Cherenson, Tom Limoncelli, Berthold Paffrath, Fuat Baran, Anant Kumar, Art Harkin, Win Treese, Don Lewis, Christophe Wolfhugel, and others.\nPaul Vixie became BIND\u0026rsquo;s principal architect/programmer and sponsored the development of BIND version 4.9.2 through his own firm, Vixie Enterprises.\nBIND Versions 4 and 8: The Beginnings of ISC ISC was founded in 1994 by Rick Adams, Paul Vixie, and Carl Malamud, expressly to provide a home for BIND development and maintenance. BIND versions from 4.9.3 onward were developed and maintained by ISC, with support provided by ISC\u0026rsquo;s sponsors. As co-architects/programmers, Bob Halley and Paul Vixie released the first production-ready version of BIND version 8 in May 1997.\nBIND versions 4 and 8 are officially deprecated. Sources are still archived on ftp.isc.org.\nBIND 9: 2000 - Today There have been more than 43 core developers who have contributed significantly to BIND 9 development and maintenance over its 20+ year lifespan. BIND version 9 was released in September 2000 and was a major rewrite of nearly all aspects of the underlying BIND architecture. After Bob Halley and Michael Graff began work on the initial version of BIND 9, Mark Andrews, David \u0026ldquo;Tale\u0026rdquo; Lawrence, Andreas Gustafsson, Brian Wellington, James Brister, and Michael Sawyer joined the team and each made significant contributions to early versions of BIND 9. Of the initial development team, only Mark Andrews still works on BIND. After Mark, Evan Hunt, who joined the core team in 2007, is the current team member with the longest tenure.\nBIND 10 Development, 2009 - 2014 In 2009, ISC began an attempt to rewrite BIND from the ground up, with BIND 10. 
It was intended as a replacement for, and improvement on, BIND 9, based on an entirely new application framework. BIND 10 was a collaborative project with a group of major funders and technology contributors, primarily from the ccTLD user segment. It is generally believed to have suffered from the \u0026ldquo;second system\u0026rdquo; problem, but the truth is probably more complex than that. In 2014, Shane Kerr, the last Project Lead for BIND 10, gave a talk on \u0026ldquo;The Decline and Fall of BIND 10\u0026rdquo; at the RIPE 68 community meeting.\nDuring the BIND 10 development years, Mark Andrews and Evan Hunt remained co-maintainers of BIND 9, while the rest of the ISC DNS team focused on \u0026ldquo;the next system.\u0026rdquo; In 2014, ISC ended development on BIND 10 and refocused on investing in BIND 9 instead.\nCommunity In 2016, ISC announced, after community discussion on the bind-users mailing list, that BIND 9.11.0 and later versions would be distributed under the MPL 2.0 license. In July 2017, ISC opened the BIND bug database to external users; in February 2018, ISC migrated the BIND project to a new self-hosted GitLab instance and invited external contributors to join.\nThe BIND 9 \u0026ldquo;History\u0026rdquo; file, included in the distribution, lists the major changes in every release since version 9.2.0.\nRefactoring BIND 9 In 2017, ISC embarked on a program of refactoring to update or replace some of the most complex and difficult-to-maintain parts of BIND. We refactored our Response Policy Zones (RPZ) implementation to \u0026ldquo;disentangle\u0026rdquo; it from the Red-Black Tree (rbtdb). We refactored several core functions, including query_find() and answer_response(), to simplify them; they were both very complex functions. query_find() had a McCabe score of 453 before refactoring, meaning there were 453 different code paths to consider when making changes to that function!
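To illustrate the metric, a McCabe (cyclomatic-complexity) score is essentially one plus the number of branch points in a function. Here is a rough Python sketch of such a counter; it is a simplified illustration only, not the tool used to measure query_find(), and real complexity checkers count a few more constructs (boolean operators, case labels, and so on).

```python
import ast

def mccabe_score(source: str) -> int:
    """Estimate cyclomatic complexity: 1 + the number of branch points.

    Simplified for illustration; real tools (e.g. pmccabe for C) count
    additional constructs, but the principle is the same.
    """
    tree = ast.parse(source)
    # Each of these AST node types adds one decision point.
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

example = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        if x > 100:
            return "large"
    return "positive"
"""
print(mccabe_score(example))  # 5: four branch points plus one
```

A score of 5 is easy to reason about; a score of 453 means hundreds of distinct paths to keep in mind during every change, which is why the refactoring mattered.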
BIND 9 was first written to run on a wide range of operating systems at a time when they did not have common networking interfaces, which led to increasing complexity over the years. In 2019-20, ISC replaced the home-grown network interface in BIND with the modern and widely implemented libuv component.\nCurrent BIND 9 Development Team The current BIND development team is led by Ondřej Surý, and includes Mark Andrews, Evan Hunt, Michał Kępień, Matthijs Mekking, Michal Nowak, Artem Boldariev, Petr Špaček, Aram Sargsyan, Tom Křížek, Štěpán Balážik, and Aydın Mercan.\nThe complete \u0026ldquo;Authors\u0026rdquo; file from the BIND distribution is extensive but incomplete. BIND\u0026rsquo;s commit history does not capture the real extent of community contributions from outside the core team, because only a relatively small group of ISC staff had commit privileges; until we adopted the current GitLab development system, the committer, rather than the author, received credit. However, despite the missing commit history, notable long-time external contributors include Tony Finch (before he worked for ISC) and Petr Menšík.\n","permalink":"/bindhistory/","tags":null,"title":"The History of BIND"},{"categories":["BIND","Open Source"],"contents":"The BIND 9.16 branch is approaching EOL. This branch was first published on March 6th, 2020, four years ago. For some time now, we have been adding only critical fixes to BIND 9.16, but that will end with the April release.\nWe encourage users running 9.16 or (gasp) 9.11 to upgrade to 9.18 at this time.\nThe 9.18 branch has consistently outperformed the 9.16 branch, and we are confident that it is more stable than 9.16. One of our support engineers has prepared a knowledgebase article on updating from 9.16 to 9.18, which may be useful to you as you plan your migration.
We also attempt to track the major feature differences from branch to branch in the BIND Significant Features matrix, although this is far less detailed than the release notes. Please note that BIND 9.16 was the last branch that included support for the Windows(tm) operating system.\nFor an overview of our release plan, we maintain another knowledgebase article. This was updated earlier this month to move the start of future new stable branches from Q1 to Q2. The problem with starting a new stable branch in Q1 is that, after the long holiday quiet period, we always have a number of important fixes and changes we need to release before we can start a new stable branch. We are currently projecting that our next stable branch, BIND 9.20, will be released late in Q2.\nFor your convenience, we also list our planned EOL dates next to each software release on https://www.isc.org/download/. Updates such as this one are typically posted on the bind-announce mailing list, which is open to all.\n","permalink":"/blogs/2024-bind-9.16-eol/","tags":["BIND"],"title":"BIND 9.16 Branch is approaching EOL"},{"categories":null,"contents":"This list includes products that provide a management interface for, or integration with, ISC open source software. Some of these companies have a support relationship with ISC: those are highlighted with links to their websites. These organizations get advance notice of security vulnerabilities, enabling them to prepare patched versions of their products when the vulnerability is published. They are also able to escalate bugs to ISC for resolution. We encourage you to patronize companies with a support relationship with ISC.\nPlease note that we cannot guarantee that any of these products contain currently supported versions of ISC software and we do not know what modifications they may have made to the open source we publish.
The newer versions of our open source are released under the MPL 2.0 license, which imposes some responsibilities on creators of \u0026lsquo;larger works\u0026rsquo;. If you use one of the products below, you may be entitled to a copy of the source code for the product.\nPublisher Product ISC Software ApplianSys DNSBox BIND 9, ISC DHCP, Kea BlueCat Networks BlueCat DDI Platform, Micetro BIND 9, ISC DHCP, Kea Canonical MAAS - Metal as a Service ISC DHCP Cygna Labs Diamond IP, VitalQIP BIND 9, ISC DHCP, Kea Efficient IP IPAM for Linux, SolidserverDDI, Hybrid DNS BIND 9, ISC DHCP F5 BIG-IP BIND 9 InfoBlox Infoblox DDI BIND 9, ISC DHCP, Kea DHCP Netgate TNSR, pfsense - Kea Kea DHCP Planisys PDNS BIND 9 TCPWave TCPWave IPAM BIND 9, ISC DHCP Send updates! We cannot keep this up to date without your help! This list is based on information provided by users, which we have attempted to confirm by looking at the relevant websites.\n","permalink":"/commercialproducts/","tags":null,"title":"Commercial Products for Managing ISC Open Source"},{"categories":null,"contents":"All current versions of BIND 9, except for the BIND -S Edition, all current versions of Kea DHCP, ISC DHCP versions 4.4.0 and greater, and all versions of Stork are available under the Mozilla Public License 2.0.\nThe BIND -S edition is licensed under a commercial license. Some Kea Hooks are licensed under the MPL 2.0; others are commercially licensed. The Kea Administrator Reference Manual clearly states which hooks are open source. Mozilla publishes a helpful FAQ about the MPL 2.0 license.\nThe MPL 2.0 license is approved by the Open Source Initiative. BIND 9 versions prior to BIND 9.11.0b, ISC DHCP versions prior to ISC DHCP 4.4.0, and Kea versions prior to Kea 1.0.0 were released under the ISC License, the text of which is shown below. The ISC License is registered with and approved by the Open Source Initiative.
Permission to use, copy, modify, and/or distribute this License text for any purpose with or without fee is hereby granted. (like SPDX: 0BSD)\nText of the ISC License:\nCopyright © 2004-2013 by Internet Systems Consortium, Inc. (\u0026ldquo;ISC\u0026rdquo;)\nCopyright © 1995-2003 by Internet Software Consortium\nPermission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.\nTHE SOFTWARE IS PROVIDED \u0026ldquo;AS IS\u0026rdquo; AND ISC DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n","permalink":"/licenses/","tags":null,"title":"ISC Open Source Software Licenses"},{"categories":["Kea","Open Source","DHCP"],"contents":"Many Kea DHCP users use ISC packages, so we want to let everyone know that we will be cleaning up some older repositories on Cloudsmith. The repositories hosting development branches 1.9 and 2.1 will be decommissioned on February 29, 2024, and the repository for the 2.3 releases is scheduled for removal on March 29, 2024. 
Moreover, the repository for the 2.5 releases will be phased out following the public availability of the stable 2.6.0 release.\nAdditionally, beginning with the 2.7.X release series, we will be launching a new repository on Cloudsmith, named kea-dev, where all other future development releases will be stored.\nRepositories for stable releases will remain unaffected.\nShould you require packages from the 1.9, 2.1, or 2.3 releases, we strongly recommend creating a backup copy.\nIt\u0026rsquo;s possible to use the wget tool to download particular packages, e.g:\nwget https://dl.cloudsmith.io/public/isc/kea-2-1/deb/debian/pool/stretch/main/p/py/python3-isc-kea-connector_2.1.7-isc20220627172414/python3-isc-kea-connector_2.1.7-isc20220627172414_all.deb\nOr use a package manager: apt-get download \u0026lt;package-name\u0026gt;; dnf download \u0026lt;package-name\u0026gt;;\nThe source code for each release can always be accessed and will continue to be available on our downloads site at https://downloads.isc.org/isc/kea/.\n","permalink":"/blogs/kea-cloudsmith-updates/","tags":["Kea","DHCP"],"title":"Kea Cloudsmith Repository Updates"},{"categories":["BIND","Open Source"],"contents":"Overview PROXYv2 protocol support has been added in the BIND 9.19 development branch. The PROXYv2 protocol is designed with one thing in mind: passing transport connection information (including, but not limited to, source and destination addresses and ports) to a backend system across multiple layers of NAT, TCP, or UDP proxies and load balancers. The protocol achieves its goal by prepending each connection or datagram with a header reporting the other side\u0026rsquo;s connection characteristics. Effectively, from the point of view of the backend (in our case, BIND), it is a controllable way to spoof peer and incoming interface connection information.\nWith the addition of this feature, BIND can act as a backend to the front-end proxies implementing the PROXYv2 protocol. 
The list of such proxy implementations includes but is not limited to dnsdist and HAProxy. Many cloud infrastructure providers also implement the PROXYv2 protocol in their in-house front-end software.\nThe PROXYv2 protocol is supported for all DNS transports currently implemented in BIND, including DNS over UDP and TCP (Do53), DNS over TLS (DoT), and DNS over HTTP(S) (DoH). The same applies to dig as well, as we wanted to ensure that DNS operators who want to use the PROXYv2 protocol have a reliable tool for diagnosing their deployments. Moreover, dig might be one of the few such tools, if not the only one, that implements PROXYv2 for so many DNS transports.\nThere are currently two versions of the PROXY protocol - text-based PROXYv1 and binary-based PROXYv2. Also, there are protocols with similar purposes, like Simple Proxy Protocol (SPP) for UDP from Cloudflare. BIND is capable of accepting PROXYv2 only, so if we mention the PROXY protocol without a version, PROXYv2 is implied if not stated otherwise.\nPROXYv2 in BIND uses the source and destination addresses and ports extracted from PROXYv2 headers instead of the real source and destination addresses and ports, as seen by the operating system. With very few exceptions (which we will discuss later), from the point of view of BIND, these are real - as a result, you will see them in the logs, and the ACL functionality of BIND will use them during matching and so on. In short, almost all aspects of BIND functionality that need source and destination addresses and ports will use the ones provided via the PROXYv2 protocol. 
Of course, the source and destination addresses of the real endpoints are preserved internally and are used for the actual data exchanges.\nThe above is done to fulfill the PROXY protocol\u0026rsquo;s goal of filling the backend server\u0026rsquo;s internal structures with the information collected by the front-end proxy, which the server would have been able to get by itself from the operating system if the client were connecting directly to the server instead of via a front-end. That provides a level of transparency that has many architectural benefits, some of which are discussed in detail in the PROXY protocol specification. Let’s discuss them briefly.\nApplications for PROXYv2 Firstly, it becomes possible to chain multiple layers of front-ends (like proxies and firewalls) and always present the original connection information (like source and destination IP addresses and ports). With PROXY, the complexity of the forwarding infrastructure in front of BIND does not matter, as it makes it possible to preserve and pass the original information about endpoints through it to the backend. It might consist of just one front-end instance running on the same machine or local network, or be a complex, multi-layer infrastructure with many forwarders.\nSecondly, this feature makes it easier to deploy elaborate infrastructures with large front-end farms in front of big backend farms, possibly shared between multiple sites; when using the PROXY protocol, the servers do not need to know routes to the client, only to the closest proxy that forwarded the connection. That provides benefits over the so-called transparent proxies, because using them usually implies that there is only one return path for the data; in cases when both front-end proxies and backend servers support the PROXYv2 protocol, it is easier to provide multiple return paths while preserving the ability to pass the endpoints data to the backends. 
That might be particularly useful for large DNS resolver operators.\nThirdly, using PROXY eases IPv4 and IPv6 integration; in particular, it is absolutely fine to receive a request over IPv4 and forward it over a chain of intermediates that are connected over IPv6 only (or vice versa). In that case, with proper configuration, the backend server will receive the original endpoint information.\nFourthly, PROXY support allows relatively transparent transport protocol conversion (from the backend server perspective), including TLS termination. There are front-end implementations that allow transport protocol conversion; for example, it is possible to configure a dnsdist instance to serve DNS over HTTP/2 or DNS over QUIC (starting from version 1.9.X), while the DNS backend server might not have these transports enabled or might not support them at all. Similarly, HAProxy is well known for supporting HTTP protocol version conversion and is also often used for TLS termination. However, simply placing such front-ends in front of a backend (e.g. BIND) means losing the original endpoint information. That is exactly the problem that enabling PROXYv2 on both the front-end and the backend can solve. This feature is useful for small and large installations alike. In particular, it allows serving DNS over transports currently not supported by BIND, like QUIC (DNS over QUIC/DoQ), in a very transparent way.\nLet\u0026rsquo;s discuss how we can use the PROXYv2 protocol in BIND, with a few examples.\nPreserving Connection Information with PROXYv2 in BIND To demonstrate the use of PROXYv2 in BIND, we should discuss a couple of things about the front-ends we are going to use for demonstration, namely HAProxy and dnsdist, and some details about the PROXYv2 protocol.\nThe specification advises sending a PROXYv2 header immediately after establishing a TCP connection.
In fact, it provides only a few details about UDP (mostly the necessary constant definitions) and is not concerned with using PROXY over TLS.\nRegarding PROXY over UDP, most software developers seem to have agreed that a PROXY header should precede the data in a datagram, which is a logical choice. dnsdist works like this and is one of the few implementations that support PROXYv2 over UDP.\nRegarding PROXY over TLS, it is trickier. As noted above, the PROXY protocol specification is mainly concerned with TCP proxies. As TLS is used on TCP connections, the PROXY protocol can be used on TLS connections just as it is described in the specification: that is, by sending a PROXY header in front of any data related to the TLS handshake. In this case, the PROXY header itself is, as one could expect, not encrypted. Most of the software implementing PROXY over TLS works like this, including HAProxy.\nOne can see a possible problem with this approach, as relatively sensitive data is transmitted in clear text over what is expected to be an otherwise secure, encrypted connection. In order to resolve this problem, the dnsdist authors implemented PROXYv2 slightly differently: instead of sending plain PROXY header data before the TLS handshake over TCP, they decided to send an encrypted PROXY header as the first chunk of data after the handshake. One could argue that this is not described in the specification and, thus, deviates from it. On the other hand, all sensitive data is protected and cannot be collected or analyzed by any intermediaries. It is also worth noting that recent versions of dnsdist (since 1.8.X, 1.9.0) have added support for accepting non-encrypted PROXYv2 protocol messages as well.\nBIND and dig, which strive to be useful in all deployment scenarios, support both plain and encrypted modes for the PROXY protocol.
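As a concrete illustration of what a plain PROXYv2 header looks like on the wire, here is a short Python sketch that builds one for a UDP-over-IPv4 datagram, following the binary layout in the PROXYv2 specification: a fixed 12-byte signature, a version/command byte, a family/transport byte, a length field, and then the endpoint addresses and ports. This is illustrative code, not part of BIND or dnsdist.

```python
# A sketch of the plain PROXYv2 binary header for a UDP-over-IPv4 datagram,
# following the PROXY protocol v2 specification; illustrative code only,
# not taken from BIND or dnsdist.
import socket
import struct

# Fixed 12-byte signature that starts every PROXYv2 header.
PP2_SIGNATURE = b"\x0d\x0a\x0d\x0a\x00\x0d\x0a\x51\x55\x49\x54\x0a"

def build_proxyv2_header(src_ip, dst_ip, src_port, dst_port, udp=True):
    # 0x21 = protocol version 2 + PROXY command;
    # 0x12 = IPv4 + UDP (DGRAM), 0x11 = IPv4 + TCP (STREAM).
    fam_proto = 0x12 if udp else 0x11
    # IPv4 payload: source address, destination address, source port, destination port.
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    return PP2_SIGNATURE + struct.pack("!BBH", 0x21, fam_proto, len(addrs)) + addrs
```

A PROXYv2-aware front-end prepends a header like this to the data it forwards; the backend parses and strips it, then treats the carried addresses as the client's.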
As expected, only plain mode is available for non-encrypted DNS transports, while the ones based on TLS support both.\nConfiguration Configuring PROXYv2 support in BIND requires first establishing which protocols BIND will listen for, then adjusting ACLs to determine which clients have access.\nHere are a couple of examples for the first step:\noptions {\n    # Enable PROXYv2 for Do53 (both TCP and UDP)\n    listen-on port 53 proxy plain { any; };\n\n    # Enable proxy for DoT, use encrypted PROXY\n    # headers for compatibility with dnsdist\n    listen-on port 853 proxy encrypted tls local-tls { any; };\n\n    # Enable proxy for DoH, unencrypted proxy\n    # headers for compatibility with HAProxy (and other tools)\n    listen-on port 443 proxy plain tls local-tls http local-http-server { any; };\n};\nBy itself, this configuration is not enough to allow accepting PROXYv2. The second step is to set the access control lists (ACLs) associated with PROXYv2, namely allow-proxy and allow-proxy-on.\nallow-proxy - defines an ACL for the client addresses allowed to send PROXYv2 headers.\nallow-proxy-on - defines an ACL for the interface addresses allowed to accept PROXYv2 headers.\nThese are the only ACLs that work with real endpoint addresses and ports - everything else will use the information carried by the PROXYv2 protocol.\nOne could ask why enabling PROXYv2 via listen-on statements is not enough. That is done for security reasons. As mentioned above, the core idea behind PROXYv2 is to provide a way to spoof source and destination addresses and ports, which has security implications, as a client might make it seem as if a request is coming from someone else.\nFor example, it is customary, albeit not recommended, to configure BIND to allow recursive queries for clients on the local networks only, while serving zones on public networks.
If such a BIND instance allows the PROXY protocol on a public interface, then a remote client could run recursive queries over a public interface, effectively turning the instance into an open resolver. That is only one example that comes to mind, but in general, PROXY allows bypassing other ACLs, too. That, of course, is undesirable and might be unexpected for the operator.\nOn the other hand, every deployment is different, so it is impossible to provide a default that would fit everyone, especially without sacrificing security. As a result, BIND does not allow PROXY for any clients by default; BIND defaults to the following:\noptions {\n    ...\n    allow-proxy { none; };\n    allow-proxy-on { any; };\n    ...\n};\nIn other words, by default, the PROXY protocol is not allowed for any client address but is allowed on any interface. For PROXY to be accepted, a request should pass checks by both of the ACLs.\nFor example, it is possible to allow PROXY for clients on local networks where BIND has network interfaces by configuring it this way:\noptions {\n    ...\n    allow-proxy { localnets; };\n    ...\n};\nAs another example, it is possible to allow PROXY for clients on a loop-back interface only by configuring it this way:\noptions {\n    ...\n    allow-proxy { 127.0.0.1; ::1; };\n    ...\n};\nAnd now a somewhat \u0026ldquo;inverted\u0026rdquo; example. Let\u0026rsquo;s imagine a situation where we know that it is safe for BIND to accept the PROXY protocol on a particular interface (192.168.1.10 in this example):\noptions {\n    ...\n    allow-proxy { any; };\n    allow-proxy-on { 192.168.1.10; };\n    ...\n};\nOf course, you can configure these two ACLs to match your infrastructure needs exactly.
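To make the interplay of the two ACLs concrete, here is a small Python sketch of the decision described above: the real client address must match allow-proxy AND the real interface address must match allow-proxy-on before a PROXYv2 header is honored. This is not BIND source code; the function names and the minimal ACL matcher are invented for illustration.

```python
# Illustrative sketch only (not BIND source code): before a PROXYv2 header
# is honored, the *real* client address must pass allow-proxy AND the
# *real* interface address must pass allow-proxy-on. Only then do the
# addresses carried inside the header replace the real ones for logging
# and for all other ACL matching.
import ipaddress

def acl_match(addr: str, acl: list) -> bool:
    """Minimal ACL matcher: 'none' matches nothing, 'any' matches
    everything, otherwise match addr against a list of prefixes."""
    if acl == ["none"]:
        return False
    if acl == ["any"]:
        return True
    ip = ipaddress.ip_address(addr)
    return any(ip in ipaddress.ip_network(net) for net in acl)

def accept_proxy_header(real_src: str, real_dst: str,
                        allow_proxy: list, allow_proxy_on: list) -> bool:
    # Both ACL checks must pass before the PROXYv2 addresses are trusted.
    return acl_match(real_src, allow_proxy) and acl_match(real_dst, allow_proxy_on)
```

With the defaults shown above (allow-proxy { none; }; allow-proxy-on { any; };), the first check always fails, so no client can send PROXYv2 headers until the operator explicitly opens the door.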
What we do not recommend doing, though, is allowing PROXY on publicly accessible network interfaces due to the security concerns described above.\nWe should note that enabling PROXYv2 on a listen-on statement will prevent the corresponding listener from accepting \u0026ldquo;regular\u0026rdquo; DNS queries that arrive without a PROXYv2 header.\nNow that we have learned enough about PROXYv2 support in BIND, we can provide more examples of using BIND with both dnsdist and HAProxy. Both of these proxying front-ends have unique functionality, and although some of the capabilities overlap, they can also complement each other.\nConclusion This document is an introduction to the new PROXYv2 protocol support in BIND. We hope that it fulfills its purpose and, even more than that, can serve as a guide on how the PROXYv2 protocol works in general when applied in a DNS setting. We have omitted some details, but if you are more interested in the PROXYv2 protocol itself, we suggest you read the relatively short specification. We have a longer version of this article, with a number of additional configuration examples, in the ISC Knowledgebase.\nWith the help of the front-ends that support PROXYv2, you can have a very complicated, yet transparent to BIND, network of forwarding entities. Both dnsdist and HAProxy support accepting and forwarding PROXYv2. To learn how to make these tools accept PROXYv2, please take a look at setProxyProtocolACL for dnsdist and accept-proxy (that can be added to the bind statement) in the case of HAProxy. As of this blog post, BIND is capable of receiving PROXYv2, but there is no support for sending connection details when interacting with other servers (e.g. for forwarding). We may address this in future releases if there is demand for this feature.\nImplementing PROXYv2 support for all DNS transports that BIND supports was a major task and required a significant redesign of some of the DNS transports. 
It is particularly connected to Stream DNS, BIND’s new unified DNS transport for DNS over TCP and DNS over TLS. Having PROXYv2 support will benefit both large and small installations of BIND and, maybe, even allow you to think of your infrastructure in different ways, as transparently passing information about remote peers to backends is a very powerful mechanism. We hope that you will find it useful.\nReferences Proxyv2 Specification Knowledgebase article with more configuration examples listen-on statements description; allow-proxy option; allow-proxy-on option. New PROXY related options in dig. dnsdist documentation on Proxyv2 ","permalink":"/blogs/proxyv2-2024-02/","tags":["BIND","Privacy","DoH"],"title":"PROXYv2 Support in BIND 9"},{"categories":["BIND","Security","Open Source"],"contents":"In the aftermath of yesterday\u0026rsquo;s BIND announcement of seven new CVEs, one of them with a fairly wide impact, BIND users might be wondering why ISC publishes so many security vulnerabilities. While it is undeniable that BIND is mature software, written before today\u0026rsquo;s level of network abuse was even contemplated, we are actually rather proud of our vigilance and honesty in looking for vulnerabilities, and acting to address them promptly and transparently. One such effort to proactively look for potential security issues was our 2023 security code audit.\nIn mid-2023, ISC hired X41 D-Sec, a German code-auditing firm with specific expertise in finding security vulnerabilities, to audit ISC\u0026rsquo;s BIND 9 source code. We were aware of their earlier audit of NLnet Labs\u0026rsquo; Unbound software, so we knew they had some recent exposure to DNS-specific issues. We thought that a team of dedicated auditors would bring fresh eyes and a concentrated focus that might help us significantly improve BIND security.
At any rate, we were lucky enough to be able to afford the expense and thought it was worth trying.\nThe auditors broke the work into two sprints of 30 days each, using two engineers. We began with an online workshop to discuss areas to focus on, and we set up a shared Mattermost instance for quick and easy communications between the auditors and the BIND engineering team. When the auditors were ready to begin each sprint, we asked them to download and analyze the latest development version at the time.\nTheir methods included both manual inspection and running some automated analyses. They started by looking for the most commonly found types of vulnerabilities in C code, such as memory management problems. Where they found a weakness, they then searched for other instances of a similar pattern or problem. As they said in their report, \u0026ldquo;The code was inspected for security issues such as integer overflows, OOB reads and writes, hash table degradation, reference counter overflows, use-after-free and double free bugs. Beyond looking for common C level implementation bugs, the team investigated the codebase for bug patterns that might affect BIND 9 specifically.\u0026rdquo;\nFollowing the manual inspection, the X41 team ran some static analysis and fuzzing. We already run a lot of these automated tools ourselves, so we were not surprised that they didn\u0026rsquo;t turn up anything significant. The auditors offered to share their scripts with us, should we want to reuse them in the future.\nOur experience with the audit was positive: the auditors did not require a lot of help from ISC\u0026rsquo;s engineering team; we merely shared some ideas about where we thought they might find some issues, and they went off and did whatever they thought made sense. Although we were available to answer questions, they didn\u0026rsquo;t have many. 
We were not surprised at the number of issues they raised, although most of them did not rise to the level of something we would rush to address. The issues they did find were real issues, rather than false positives. We were pleased that they filed issues in our public-facing GitLab repository for everything they found, and provided excellent documentation to help us see the problems.\nOverall, the audit reinforced what we already knew, which is that BIND\u0026rsquo;s security issues are mostly not due to common coding errors, but instead are directly related to the excessive complexity of the DNS standards and therefore of the BIND 9 implementation. Most of our CVEs, historically, have been vulnerabilities to distributed denial-of-service (DDoS) attacks by external actors sending some sort of malicious queries that \u0026ldquo;trick\u0026rdquo; BIND into performing complex or resource-intensive processing or replies.\nThis security audit is only one of the many ways in which we try our best to ensure that BIND 9 is secure and free from vulnerabilities.\nRead the full report of the audit here.\nUsers of BIND who wish to subscribe to our Advance Security Notification (ASN) service are encouraged to contact our sales team.\n","permalink":"/blogs/2024-bind-audit/","tags":["BIND","security"],"title":"BIND 9 Security Audit"},{"categories":["BIND","Security","Open Source"],"contents":"Yesterday we published BIND 9.16.48, 9.18.24, and 9.19.21. These releases mitigate several vulnerabilities that are described in our announcement.
Two of these vulnerabilities were multi-vendor, pan-DNS concerns.\nThese two CVEs are:\nCVE-2023-50387: KeyTrap - Extreme CPU consumption in DNSSEC validator and\nCVE-2023-50868: Preparing an NSEC3 closest encloser proof can exhaust CPU resources\nISC would like to thank Elias Heftrig, Haya Schulmann, Niklas Vogel, and Michael Waidner from the German National Research Center for Applied Cybersecurity ATHENE for bringing the KeyTrap vulnerability to our attention and coordinating disclosure. The research team also provided invaluable help testing mitigations for KeyTrap.\nBoth of these are ways in which an abuser can exploit the standard DNSSEC mechanisms intended to protect DNS integrity, consuming excessive resources on a resolver and causing a denial of service for legitimate users.\nWhat is the KeyTrap vulnerability? Essentially, the attacker crafts a DNS zone with many DNSKEY and RRSIG records, and a standards-compliant DNSSEC validator tries all possible combinations of DNSKEY and RRSIG records in the vain hope of finding the one combination which matches and validates. If the validator does not implement an explicit limit on the amount of work it will do, it can spend an outrageous amount of resources doing useless work. This attack is also asymmetric - the attacker expends relatively little effort to cause the resolver to expend a lot of effort.\nThis attack is extremely effective against older versions of BIND because DNSSEC validation was historically done in the same processing thread as basically everything else.
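The asymmetry of the trial-validation work is easy to see in a sketch: a standards-literal validator tries every (DNSKEY, RRSIG) pair, so a zone with k keys and s signatures that all fail forces on the order of k × s expensive signature checks, and the fix is a hard budget on attempts. This is illustrative Python, not BIND's code, and max_attempts is an invented number, not BIND's actual limit.

```python
# Illustrative sketch (not BIND's code): a standards-literal validator tries
# every (DNSKEY, RRSIG) pair, so k keys and s signatures that all fail force
# k * s expensive signature checks. The mitigation is a hard budget on
# validation attempts per answer; max_attempts is an invented number here.

def validate_answer(keys, sigs, crypto_verify, max_attempts=16):
    """Return (validated, attempts_used), never doing more than
    max_attempts signature verifications."""
    attempts = 0
    for sig in sigs:
        for key in keys:
            if attempts >= max_attempts:
                return False, attempts      # budget exhausted: give up early
            attempts += 1
            if crypto_verify(key, sig):
                return True, attempts       # found the matching combination
    return False, attempts

# An attacker publishing 100 keys and 100 always-failing signatures would
# force 10,000 verifications on an unbounded validator; the budget caps it.
```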
This design flaw in BIND, together with the unlimited efforts at validation, allowed an attacker to block query processing in BIND for a really long time \u0026ndash; on the order of minutes or possibly hours on a slow CPU.\nKeyTrap mitigation in BIND To mitigate the KeyTrap vulnerability we have made two significant changes to BIND:\nBIND now limits the amount of work spent on DNSSEC-validating a single answer.\nBIND now offloads DNSSEC validation1 into separate threads.\nThis change provides defense in depth: DNSSEC validation no longer blocks processing of other requests. Thanks to this design change in BIND, KeyTrap and other similar DNSSEC-related vulnerabilities will not have as strong an impact on unrelated queries. This change also improves resolver resilience when under random subdomain attacks targeting DNSSEC-signed domains.\nWith these changes, a DNSSEC validation attack which bypasses all other limits will be able to consume no more than approximately 1/2 of CPU capacity on the affected machine, leaving the other half for normal processing.\nNSEC3 closest encloser proof can exhaust CPU (CVE-2023-50868) The effectiveness of this design change is demonstrated by the fact that our mitigation for KeyTrap is also effective against another denial-of-service attack published yesterday, CVE-2023-50868.\nAn attacker either selects or creates a DNSSEC-signed zone with NSEC3 parameters configured in excess of the Best Current Practice RFC9276, primarily by using extra iterations, and then launches a random subdomain attack against this zone. Because this Best Practice document is not yet universally followed, resolvers typically accept the extra iterations and spend CPU cycles on SHA1 hashing.\nThese extra SHA1 hash iterations serve as another potential denial-of-service attack vector. Again, the relevant standard, RFC 5155 section 8.3, does not warn about this risk, and multiple implementations did not protect against it.
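The cost of extra NSEC3 iterations is easy to sketch. Per RFC 5155, each iteration is one additional SHA-1 pass over the previous digest plus the salt, so the per-name hashing cost grows linearly with the attacker-chosen iteration count. The sketch below skips the wire-format owner-name canonicalization that the RFC requires.

```python
# Illustrative sketch of RFC 5155 NSEC3 hashing (the real computation runs
# over the wire-format owner name, which this sketch skips): every extra
# iteration is one more SHA-1 pass, so an attacker who picks a large
# iteration count multiplies the CPU cost of every hash a resolver computes.
import hashlib

def nsec3_hash(name: bytes, salt: bytes, iterations: int) -> bytes:
    # IH(salt, x, 0) = H(x || salt); IH(salt, x, k) = H(IH(salt, x, k-1) || salt)
    digest = hashlib.sha1(name + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return digest

def sha1_invocations(iterations: int) -> int:
    # Total SHA-1 calls needed to hash one name.
    return 1 + iterations
```

With BIND's cap of 150 iterations, a resolver performs at most 151 SHA-1 passes per name; without a cap, the attacker chooses the multiplier.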
Ironically, we discovered this flaw while testing mitigations for KeyTrap!\nThe novelty of this vulnerability is in the ability to influence not only the zone used, but also the number of retries done by the Closest Encloser Proof algorithm. This allows the attacker to make the attack roughly 125x more effective than previously thought possible.\nLuckily, all versions of BIND released in 2023 already limited the number of NSEC3 iterations to a maximum of 150, and the SHA1 hash algorithm is efficient, so the impact on recent versions of BIND is much milder: it requires hundreds of queries per second to exhaust a resolver CPU.\nThe sad part of this story is that if all DNS zone operators had followed the Best Current Practice, resolver implementations could have enforced stricter limits, making this attack totally ineffective.\nWe are not there yet, but we at Internet Systems Consortium are committed to tightening the limits on NSEC3 iterations as soon as practical - and we encourage DNS zone operators to follow the advice. Please read and follow RFC 9276 section 3.1!\nDNS scalability: the good, the bad, and the ugly The fact that it is possible to cause excessive use of resources is not entirely an accident: the DNS protocol specification(s) intentionally do not put explicit limits on many things, including:\nthe number of CNAME records in a chain - which led to the DNS Unchained attack\nthe number of delegations in a recursion loop - which led to NXNSAttack\nthe number of answers to a given source(s) - which led to amplification attacks\nthe number of queries in general - which led to the invention of the random subdomain attack\nthe number of validations - which led to KeyTrap\nthe number of NSEC3 hash iterations - which led to CVE-2023-50868\nthe number of answers in an ECS-enabled cache - which led to CVE-2023-5680\nthe number of \u0026hellip; basically anything.\nNow, you might be asking yourself: were the DNS protocol standards completely bonkers?!
And the answer is no!\nIf they had included explicit limits on all of these parameters back in 1987, we would not have been able to scale the DNS all this time without changing the protocol.\nImagine there were a hardcoded limit on the number of CNAME steps in a chain: if the limit was, e.g., \u0026ldquo;2 CNAMEs at most,\u0026rdquo; we would not have been able to construct today\u0026rsquo;s Content Delivery Networks. If there were a limit on the number of DNSKEYs, say \u0026ldquo;at most 2 DNSKEYs,\u0026rdquo; we would not have been able to use multi-signer DNSSEC setups. And we could go on. The lack of limits is on one hand dangerous, and on the other hand it has allowed us to use the same protocol and scale it for 37 years in a row!\nWe did not listen Of course the DNS protocol designers were not stupid, and they foresaw this class of problems. Back in 1987 they provided some generic guidelines for implementers:\nThe recommended priorities for the resolver designer are: 1. Bound the amount of work (packets sent, parallel processes started) so that a request can't get into an infinite loop or start off a chain reaction of requests or queries with other implementations EVEN IF SOMEONE HAS INCORRECTLY CONFIGURED SOME DATA. Indeed, you read it correctly, even back in 1987 when the original DNS specification was written, the top priority was limiting the amount of work done by the implementation! Again and again, researchers continue to show implementers the dark corners where this simple instruction was not followed.\nThe KeyTrap (CVE-2023-50387) and the NSEC3 closest encloser proof CPU exhaustion (CVE-2023-50868) vulnerabilities have joined the ranks of similar CVEs based on tricking DNS implementations into doing excessive and unnecessary work.\nBehind the scenes There will be more attacks of this type, because the DNS protocol is notoriously complex. 
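The decades-old advice quoted above boils down to a simple pattern: put an explicit bound on every loop whose length an outsider can influence, such as a CNAME chain. Here is an illustrative Python sketch; the limit shown is invented for the example, not a value taken from BIND.

```python
# Illustrative sketch of the 1987 advice "bound the amount of work": a
# resolver chasing CNAMEs must cap the chain length, or a malicious zone
# with a CNAME loop makes it spin forever. MAX_CNAME_CHAIN is an invented
# illustrative limit, not BIND's.
MAX_CNAME_CHAIN = 16

def resolve_cnames(name, cname_map, max_chain=MAX_CNAME_CHAIN):
    """Follow CNAMEs; return the terminal name, or None when the bound is
    exceeded (an overlong chain or a deliberate CNAME loop)."""
    for _ in range(max_chain):
        if name not in cname_map:
            return name              # no CNAME here: the chain is finished
        name = cname_map[name]       # one bounded step along the chain
    return None                      # work bound hit: fail instead of looping
```

The bound turns a potential infinite loop into a cheap, deterministic failure, at the cost of rejecting pathologically long (but conceivably legitimate) chains; picking that trade-off is exactly the tension the unbounded specifications left to implementers.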
Luckily, the DNS ecosystem has a healthy mix of implementers who (mostly) are able to openly speak to each other and to coordinate orderly remediation and disclosure of these vulnerabilities.\nMany DNS implementers publicly participate and share operation and development experiences through the DNS Operations, Analysis, and Research Center, or DNS-OARC. We have to thank DNS-OARC for providing a venue for coordination and secure channels which allowed everyone involved to work together on mitigations for these two recent vulnerabilities.\nIf you are doing serious work in the field of DNS and still do not participate in DNS-OARC, it\u0026rsquo;s time to reconsider! Join DNS-OARC and their Mattermost chat server, and attend their excellent workshops!\nConclusion The German National Research Center for Applied Cybersecurity ATHENE research team found an implementation problem in several DNSSEC validators which stems from a lack of imagination on the part of DNS software developers, a lack of any explicit warning in the DNSSEC standards RFC 4035, section 5.3.3, and a failure to follow decades-old, very generic advice.\nFortunately, these attacks that misuse complexity in the DNS can be fixed without changing the protocol fundamentals. The changes implemented in BIND and other DNS systems to mitigate these two vulnerabilities will improve their resilience in future attack scenarios. DNS is in a very different position than, e.g., PGP\u0026rsquo;s SKS key server network, which was basically rendered useless by one practical attack based on complexity.\nThe DNSSEC protocol continues to be secure and provides valuable protection from various attacks on the integrity of the DNS. 
The best response to this vulnerability is for users to update their DNS software to a patched version.\nBIND users who are interested in receiving advance notification of security announcements involving BIND are encouraged to contact our sales team for more information.\nCVE-2023-50387 References\nBIND 9 release announcement Unbound 1.19.1 release announcement PowerDNS Recursor release announcement Knot Resolver release announcement Dnsmasq release announcement Athene Center technical paper Geoff Huston\u0026rsquo;s blog on the KeyTrap vulnerability ISC References\nReporting Security Vulnerabilities (to ISC) ISC Software Defect and Security Vulnerability Disclosure Policy ISC CVSS Scoring Guidelines BIND 9 Security Vulnerability Matrix History of BIND vulnerabilities by impact from CVE Details In BIND 9.18 and older versions we have implemented a less invasive change. All cache-miss requests are processed in new background threads.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/blogs/2024-bind-security-release/","tags":["BIND","security"],"title":"BIND 9 Security Release and Multi-Vendor Vulnerability Handling, CVE-2023-50387 and CVE-2023-50868"},{"categories":null,"contents":"BIND Subscriber Edition End User License Agreement v2.0 (BSEEULA or the License)\nRevised: January 2024\nAGREEMENT 1.1 Defined terms are set out below in Section 8.\n1.2 This Software is supplied to You under the terms of a Subscription and Your rights under this License do not commence until You have obtained a valid Subscription.\n1.3 This License describes rights granted to You by Licensor under copyright as part of the Subscription. The rights granted are world-wide, non-transferable, and without any rights to sublicense. The rights expressly described in this License are the only rights granted and You may not exceed any use expressly authorized here. 
For the avoidance of doubt no trademark rights are granted.\n1.4 Subscriptions often include disclosure of the Source Code Form of the Software to the subscriber. The Source Code Form of the Software remains the confidential information of Licensor, its Related Persons, or third parties.\nLICENSE GRANTS AND CONDITIONS 2.1 If Your Subscription includes disclosure of the Source Code Form of the Software to You, then during the Term and the Extended Operational Term (and not thereafter) You may: (a) use the Source Code Form of the Software to create Modifications and Larger Works; (b) translate the Source Code Form of the Software, Modifications, and Larger Works into Licensed Executables; and (c) copy the Source Code Form of the Software, Modifications, and Larger Works to the extent reasonably necessary to enjoy the rights granted in this License.\n2.2 If Your Subscription includes disclosure of the Source Code Form of the Software to You, You: (a) must keep the Source Code Form of the Software (including Modifications and Larger Works) confidential; (b) may use the Source Code Form of the Software (including Modifications and Larger Works) only for the purposes of enjoying the rights granted to You in this License; (c) may only disclose the Source Code Form of the Software (including Modifications and Larger Works) to those members of Your staff and individual subcontractors under Your control who reasonably require such access and who owe equivalent confidentiality obligations to You; and (d) must otherwise comply with any additional obligations concerning confidential information set out in the Subscription.\n2.3 During the Term and the Extended Operational Term (and not thereafter) You may copy, install, and operate Licensed Executables solely for the Authorized Purposes in Authorized System Combinations on DNS Servers under Your control.\n2.4 Subject to any rights given to You under applicable law, You may not decompile, reverse-engineer, disassemble, or 
otherwise attempt to derive the Source Code Form for any Licensed Executable supplied to You under the Subscription.\n2.5 During the Term and the Extended Operational Term (and not thereafter), You may make, retain, and operate a reasonable number of copies of Licensed Executables: (a) for test and analysis purposes in a test environment that is not in productive use; and (b) for backup purposes.\n2.6 This License does not grant You any rights to distribute the Software, Modifications, Larger Works, or any Licensed Executables to any third party (including Your Related Persons).\n2.7 The rights expressly described in this License (subject to any expressly expanded rights described in the Subscription) are the only rights granted to You in respect of the Software.\nINTELLECTUAL PROPERTY Nothing in this License serves to transfer ownership of any copyright or any other intellectual property rights.\nTERM AND TERMINATION Your rights under this License begin at the commencement of the Term. Your rights expire at the times specified in the License.\nMAINTENANCE AND WARRANTY 5.1 Your rights to obtain access to Software updates, patches, and fixes from Licensor are governed by the terms of the Subscription.
Your right to obtain access to such updates, patches, and fixes ends at the conclusion of the Term.\n5.2 THE LICENSOR HEREBY EXCLUDES FROM THIS LICENSE ANY TERMS CONDITIONS OR WARRANTIES THAT WOULD OTHERWISE BE IMPLIED BY OPERATION OF LAW INCLUDING, WITHOUT LIMITATION, TERMS CONDITIONS OR WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE, OR THAT THE SOFTWARE IS FREE OF DEFECTS, OR NON-INFRINGING.\nLIABILITY UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE), CONTRACT, STATUTORY DUTY, OR OTHERWISE, SHALL THE LICENSOR OR ANY RELATED PERSON OR ANY OF THEIR DIRECTORS OFFICERS OR STAFF, BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOST PROFITS, LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PERSON SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THE PERSONS DESCRIBED IN THIS SECTION MAY RELY UPON THE TERMS OF THIS LIMITATION OF LIABILITY AND THERE ARE NO OTHER THIRD PARTY BENEFICIARIES TO THIS LICENSE.\nMISCELLANEOUS 7.1 This License is governed by and shall be construed in accordance with the law of New Hampshire.\n7.2 You agree that any dispute arising out of or related to this License shall be submitted to the exclusive jurisdiction of the courts of New Hampshire and You consent to that jurisdiction.
This jurisdiction provision is without prejudice to the rights of Licensor or any of its Related Persons to take legal action in any other jurisdiction for preliminary injunctive relief or post-judgment enforcement.\n7.3 The rights granted in this License are personal to You and You may not assign, novate, sublicense, or otherwise transfer Your rights to any other person, provided however that You may reasonably transfer the whole of Your rights under the License to a solvent Related Person as part of a genuine reorganization of Your enterprise upon notice to Licensor.\nDEFINITIONS In this License:\n8.1 “Authorized System Combination” means a system in which Related Software operates as the core of a DNS service and the Software provides added functionality to that Related Software, either through the combination of the Software and the Related Software into a Larger Work or otherwise.\n8.2 “Authorized Purposes” means operating DNS services (including resolver and authoritative services).\n8.3 “DNS Server” means one instance of a Licensed Executable running on a single computer (whether physical or virtual), and for these purposes a hardware partition, a blade, a virtual machine, and any similar environment that fulfills similar functionality, is each individually counted as a single computer.\n8.4 “Executable Form” means any form of a software work other than the Source Code Form.\n8.5 “Extended Operational Term” means the period of 5 years immediately following the end of the Term.\n8.6 “Larger Work” means a software work that You make by combining the Software with other materials.\n8.7 “License” means the text of this document and the rights granted herein.\n8.8 “Licensed Executable” means: (a) the software work that results when You translate Source Code Form of the Software, including any Modifications or Larger Works that You create pursuant to Your rights in this License, into Executable Form; or (b) the Software supplied to You in Executable Form.\n8.9
“Licensor” means the person described as such in Your Subscription who holds lawful authority to grant the License and any additional rights described in the Subscription.\n8.10 “Modifications” means a software work that You make by adding to, deleting from, or otherwise modifying, the Software.\n8.11 “Related Software” means (a) software supplied by ISC used for the purpose of implementing DNS or other network protocols (such as DHCP), together with (b) any modification to such implementation developed by You or at Your direction.\n8.12 “Related Person” means, in reference to a first person, any other natural or legal person that controls, is controlled by, or is under common control with, that first person.\n8.13 “Software” means the BIND Subscription Edition software supplied to You under the Subscription, and any updates made available to You during the Term.\n8.14 “Source Code Form” means the form of a software work preferred for making modifications.\n8.15 “Subscription” means the commercial agreement pursuant to which You obtained lawful access to the Software from a person who was authorized to enter into that agreement on those terms.\n8.16 “Term” means the term of the Subscription, commencing on the effective date of the Subscription and concluding on the end date specified in the Subscription, and if no end date is specified then concluding upon the first to occur: (a) the first anniversary of commencement; or (b) termination of the Subscription for any other reason.\n8.17 “You” and “Your” refers to the natural or legal person who is the intended and expressly declared beneficial customer for the Subscription.\n","permalink":"/bind-subscriber-edition-license/","tags":null,"title":"BIND Subscriber Edition End User License Agreement"},{"categories":["BIND","Security"],"contents":"We have recently received several questions from customers about the TsuKing vulnerability and whether BIND 9 is susceptible to it. 
The issue is the result of non-compliance with DNS standards; therefore, BIND and other DNS implementations that are compliant with DNS RFCs are not vulnerable.\nTsuKing is a vector by which it is possible to coordinate vulnerable non-compliant DNS resolvers to cause a potent DNS amplification attack.\nMore information is available at:\nA recent presentation at DNS-OARC The TsuKing site Here at ISC, we are confident that BIND is not vulnerable to a TsuKing attack, because all the TsuKing variants rely on DNS implementations that are not RFC-compliant. Specifically, the DNS-OARC presentation states that \u0026ldquo;not honoring the RD=0 flag\u0026rdquo; and \u0026ldquo;aggressive retry\u0026rdquo; are both factors in TsuKing vulnerability, and BIND has built-in measures to prevent these behaviors. All current versions of BIND adhere strictly to RFCs, and are therefore not threatened by the attack vectors outlined by the TsuKing research team.\nThis attack should serve as a reminder that protocol non-compliance can have severe consequences.\nBIND users who are interested in receiving advance notification of security announcements involving BIND are encouraged to contact our sales team for more information.\n","permalink":"/blogs/tsuking-and-bind/","tags":["BIND"],"title":"TsuKing and BIND"},{"categories":["Kea","Open Source","Annual Reports","DHCP"],"contents":"Kea in 2023 We made a total of 13 Kea DHCP releases in 2023: 10 development releases (2.3.x, 2.5.x), one major stable version (2.4.0), and two maintenance releases (2.2.1 and 2.4.1). Our overall release cycle seems to be working well.\nIn particular, the team is proud of the Kea 2.4.0 release. Packed with new features (multi-threading on by default, three allocation strategies to choose from, bulk leasequery, multiple vendor options, DoH/DoT/DNS-over-\u0026hellip; option, template classes, much faster NETCONF, per-pool stats, and much more), Kea 2.4.0 was our biggest release to date.
So far, we have needed only one maintenance release on that branch, 2.4.1 in late November.\nOur Kea support customer count continues to grow with very few departures; we now have more than 70 customers who rely on us for Kea support and advance security notifications. We are very happy to have so much interest in Kea! Looking at our support requests, in 2023 we saw a number of questions related to logging and statistics, and a few users pointing out documentation errors or omissions. Our GitLab repo saw user requests for ping-before-offer, Docker images, and ISC DHCP log emulation.\nMeanwhile, we continue to develop new features for both open source and paid support users, including bulk leasequery, which is gaining popularity; extensions for it have already been requested. The last Kea release in 2023 included the new ping check hook library. The new in-house RADIUS client implementation was completed in 2023, and now we are making it multi-threaded. We are extending our high-availability feature to support a hub-and-spoke model, which we expect will be very popular.\nWe finally released Docker files and container images, which have been requested for several years, and we recently added ARM packages. We are aware of a few shortcomings, but they are on track to be ready for the upcoming stable version, Kea 2.6. Adding a new system architecture is not something that happens often, but we were happy to add this in response to user requests.\nQuality Improvements The DHCP QA team made great progress in 2023: we were able to cut our backlog of missing tests. There\u0026rsquo;s always more to do, but the tests are coming up faster than the development team is able to create new features. With every passing month, we\u0026rsquo;re increasing our test coverage. Also, 2023 was the first year when the tests were often ready before a new feature was finished.\nStork Our Stork project is also coming along well. 
Stork\u0026rsquo;s release schedule is normally every two months: there were six releases in 2023. The long-term plan for Stork was to do a dashboard first, then turn it into a management tool, and eventually grow it into an IP address manager (IPAM). We\u0026rsquo;re now past the dashboard phase! Stork is able to monitor Kea and to some degree also BIND, but more importantly, it can now manage the two elements of Kea that change most often: subnets and pools. With Stork 1.14.0, administrators can modify existing subnets and pools, including managing them across multiple servers, and soon Stork will also be able to add new subnets and pools.\nIn 2023, Stork got two major new components: the infrastructure for hooks and the first hook that takes advantage of it. These optional libraries have contributed to making Kea a successful project, and we very much hope to repeat that success with Stork. The first hook we implemented for Stork allows it to interact with LDAP and use it for access control. This is important for a couple of reasons: first, LDAP was a popular contributed feature of ISC DHCP, and at least one Kea support customer requested it, so there is a demonstrated need; second, by making the authentication extensible, we have shown that it should not be hard to integrate with other access-control environments. Finally, if we ever get a request to do something very specific for a single customer, we will be able to confine the customer-specific code to a hook.\nStork support is not technically available as a commercial offering, but we help our Kea support customers if they have issues with Stork. We have a few open source users contributing Stork issues in GitLab and we very much appreciate the feedback.\nISC DHCP Since the ISC DHCP project has reached end-of-life, improving access to the Kea Migration Assistant (keama) was our main focus in 2023. 
We developed a web interface for the Kea Migration utility and did the first standalone release of the tool for use on customer premises. Users now have multiple ways to use this software: the web interface hosted by ISC, deployment on their own premises using Docker, ISC packages, or compiling from source in the classic way. We have let ISC DHCP users know that it is no longer supported, and we encourage them to migrate to Kea. We advertised the migration utility, and a related lease migration script, in a lightning talk at NANOG in 2023. The hosted version of the migration tool has been getting over 100 visits, and 50 - 100 configuration migrations, each week since the summer.\nOur team grew by one person, to a total of 10 for the DHCP, Stork, and QA projects. Piotrek Zadroga hit the ground running and is making valuable contributions to both Kea and Stork.\nOverall, 2023 was a great year for ISC\u0026rsquo;s DHCP team and our Kea, Stork, and ISC DHCP users, and we look forward to continuing our work in 2024.\n","permalink":"/blogs/dhcp-accomplishments-2023/","tags":["Kea","DHCP"],"title":"DHCP Team Progress in 2023"},{"categories":null,"contents":"What does it mean to host an F-Root server? The Domain Name System (DNS) is a hierarchical system to resolve domain names into IP addresses. At the root of that hierarchy are 13 different “root letters” managed by 12 different organizations known as Root Server Operators. Each RSO operates one or more root server instances.\nTo “operate” a server means to be responsible for what it does. To “host” a server means to provide space, electrical power, internet access, and remote-hands support to the server operator.\nInternet Systems Consortium (ISC), isc.org, operates F-Root and hosts some of its servers.
The F-Root servers not hosted by ISC are hosted by other organizations willing to provide the resources to support an F-Root server, in return for the benefits of having a root server on their premises, to provide the highest-quality root name service to their customers or members.\nThis document describes the requirements, details, and procedures for hosting an F-Root node that will be operated by ISC on your premises.\nWho can host an F-Root? Any organization can host a root server if the root operator permits; there are no specific rules. ISC requires that F-Root servers be hosted in professionally-managed data centers or internet exchanges, which must have redundant electrical power, adequate security, cooling, and local hands. An organization hosting an F-Root server must contract not to tamper with it, not to interfere with its operation, and not to commercialize access to it.\nWho benefits from additional root servers? If the internet is running smoothly, a very small number of root servers can satisfy the root service need of the entire world. But there are always attacks, failures, misconfigurations, overloads, and other problems. Therefore the internet community, collectively, benefits from having a larger number of servers that use anycast routing. In addition to providing load balancing that makes sense in both geographic and network terms, anycast also protects the whole service. One server can draw attack traffic while those in other regions and sites remain accessible to provide service.\nWhat is ISC’s current architecture for an F-Root server? We are now installing only single-box servers, which we call “F-Single” to differentiate them from older F-Root installations that have several servers and routers and switches making up the installation.
The F-Single peers with your Route Server, which in turn peers with the networks that will use F-Root service.\nThe hardware is a 1U Dell PowerEdge R350 with significant amounts of RAM and an Enterprise iDRAC license, including a bootable SD card so that we can remotely reinstall and re-provision the operating system without the need for remote hands.\nThis system functions as both root server and router: it speaks BGP directly to all willing peers at a routing exchange, and will accept full customer routes from your route server.\nOrganizations that host F-Root servers are not given access to the device, so it doesn’t matter to the host organization what software is running there. It’s our job to maintain it, not yours. But there is no reason not to explain that we use the latest stable versions of FreeBSD, BIND, and BIRD as the basis for F-Root.\nWhat does it cost and what is the process? Review the Technical Requirements for Hosting an F-Root Node.\nComplete an application to supply all the information to ISC that we need.\nISC technical staff will review the information provided, and decide if you are an appropriate site for a new node. You will need to agree to a Memorandum of Understanding (MoU) that sets out the terms under which this F-Root server is operated, and describes the process by which either party can request that the arrangement be terminated and the server taken offline.\nOnce the MoU is signed, you must acquire the server hardware. We ask that you buy it locally in your country, so that it will be covered by the vendor’s extended warranty and service and will not be subject to import duty. If you buy a large number of machines from Dell and are subject to a discount of some sort, your discount will apply. Currently our recommended server configuration from Dell Computer costs about $3200 delivered.
Prices vary from country to country, of course.\nWe work only with Dell server computers for F-Root because of their unique ability to be managed remotely. Our RAM and CPU requirements are somewhat high due to a need to handle the continued growth of both the root zone and the global internet routing table.\nYou buy the computer, install it in your data center, and provide power and three separate internet connections. One connection is for our access to the server’s remote management device (iDRAC), one connection is for general management access, and the third connection is to the Internet Exchange point, for BGP peering and F-Root service. When the server is installed, contact ISC and let us know. We will coordinate with you so that we can log in remotely, load the operating system and the other software, and configure it. We can then announce availability of a new F-Root node!\nLinks Hosting an F-Root Server - Technical Information Hosting an F-Root Server - Hardware Requirements Hosting an F-Root Server - Application Form Hosting an F-Root Server - Memorandum of Understanding ","permalink":"/froot-process/","tags":null,"title":"Hosting an F-Root Node"},{"categories":["Company News"],"contents":"As always, this year has had its ups and downs, but 2023 has been a predominantly good year for ISC and we hope it has been for you as well. We wish everyone a wonderful holiday season and a happy, healthy year in 2024. Thank you for your ongoing support of open source software for an open Internet!\n","permalink":"/blogs/2023-happy-holidays-from-isc/","tags":["holidays","2023","ISC"],"title":"Happy holidays from ISC!"},{"categories":["Kea","Stork","Open Source","Events"],"contents":"Kea Configuration with Stork Stork is an open source project that provides a graphical interface for monitoring, and now also configuring, Kea DHCP servers.
Besides providing a web interface, Stork also consolidates information from multiple Kea servers and provides a layer of administrative control, including integration with LDAP for administrator authentication and authorization.\nMarcin Siodelski, ISC Senior Engineer and developer on both the Kea and Stork projects, has been working on adding configuration capabilities to Stork. Marcin will demonstrate recent improvements in Stork that permit configuring subnets and pools (this requires the Kea Subnet Management hook). This is particularly helpful when configuring multiple subnets with similar settings, or when configuring the same subnets on multiple servers. After the demonstration, Marcin will take feedback and answer questions about Stork.\nWhen: Dec 13, 2023 03:00 PM Universal Time UTC\nPlease register in advance for this webinar at: https://us02web.zoom.us/webinar/register/WN__lIOYOVRTYGzHyyLITdXrg\nAfter registering, you will receive a confirmation email containing information about joining the webinar. Live participants will have the opportunity to provide feedback and suggestions for the Stork project, as well as to ask questions.\nThe recordings and slides of all of ISC\u0026rsquo;s webinars are posted after the event at https://www.isc.org/presentations/.\n","permalink":"/blogs/2023-12-13-stork-webinar/","tags":["Kea","Stork","Webinars","Events","Training"],"title":"December Webinar: Kea Configuration with Stork"},{"categories":["BIND","Performance"],"contents":"This article focuses on benchmarking resolver performance, using a methodology that aims to provide near-real-world performance results for resolvers. Our methodology has not changed significantly since our 2021 blog post, in which we compared BIND 9.16 performance to BIND 9.11. 
In this post, we will show that BIND 9.18 compares favorably with 9.16, using far less memory and significantly fewer CPU cycles to handle the same query loads.\nAbout Resolver Testing Cache State and Timing Matter Resolvers don\u0026rsquo;t know any DNS answers by themselves. They have to contact authoritative servers to obtain individual bits of information and then use them to assemble the final answer. Resolvers are built around the concept of DNS caching. The cache stores DNS records previously retrieved from authoritative servers. Individual records are stored in a cache up to the time limit specified by the authoritative server (Time To Live, or TTL). Caching greatly improves scalability.\nAny DNS query which can be fully answered from cache (a so-called \u0026ldquo;cache hit\u0026rdquo;) is answered blazingly fast from the DNS resolver\u0026rsquo;s memory. On the other hand, any DNS query which requires a round-trip to authoritative servers (a \u0026ldquo;cache miss\u0026rdquo;) is bound to be orders of magnitude slower. Moreover, cache miss queries consume more resources because the resolver has to keep the intermediate query state in its memory until all information arrives.\nThis very principle of the DNS resolver has significant implications for benchmarking: in theoretical terms, each DNS query potentially changes the state of the DNS resolver cache, depending on its timing. In other words, queries are not independent of each other. Any change to how (and when) we query the resolver can impact measurement results.\nIn more practical terms, this implies a list of variables that we have to replicate:\nA stream of test queries resulting in a realistic cache hit/miss rate. For this purpose, we have to replicate the exact queries and also their timing. Answers returned by authoritative servers, including TTL values. Network conditions between the resolver and authoritative servers (latency, packet loss, etc.). 
Cache size and other parameters affecting cache hit/miss ratio. The traditional approach implemented, e.g., in ISC\u0026rsquo;s Perflab or using the venerable resperf tool, cannot provide realistic results because it ignores most of these variables.\nThe second implication is that even the traditional QPS metric (queries answered per second) alone is too limited when evaluating resolver performance: it does not express the type of queries, answer sizes and TTLs, query timing, etc.\nOther performance-relevant variables include:\nThe protocol used between client and server (UDP, DNS-over-TLS, DNS-over-HTTP/2). DNS server setup. All of the \u0026ldquo;usual suspects\u0026rdquo; such as hardware, network driver, kernel versions, operating system configuration, firewall, etc. But these are not fundamentally different from benchmarking authoritative servers, so we will not delve into details.\nYou Can\u0026rsquo;t Simulate the Internet The long list of variables above makes it clear that preparing an isolated laboratory with a realistic test setup is very hard. In fact, ISC and other DNS vendors have learned that it\u0026rsquo;s impossible; realistic resolver benchmarking must be done on the live Internet.\nDevelopers from CZ.NIC Labs wrote a test tool called DNS Shotgun for this purpose. It replays DNS queries from traffic captures and simulates individual DNS clients, including their original query timing. The resolver under test then processes queries as usual, i.e., contacts authoritative servers on the Internet and sends answers back to the simulated clients. DNS Shotgun then receives and analyzes the answers.\nObviously, benchmarking on a live network cannot provide us with perfectly stable results. To counter that, we repeat each test several times and always take fresh measurements instead of using historical data. E.g., a comparative test of BIND versions 9.16.10 and 9.16.18 (which were released half a year apart) requires us to measure both versions again. 
This process ensures that half a year of changes on the Internet and our test system do not skew our comparison.\nFor each test run, we start with a new resolver instance with an empty cache. This way, we simulate the worst case of regular operation: it is as if the resolver was restarted and now has to rebuild its cache from ground zero.\nLet\u0026rsquo;s have a look at the variables we measure and how to interpret them.\nInterpreting Resolver Behavior The QPS metric alone is not particularly meaningful in the context of regular DNS resolver operation. Instead, we measure indications that resolver clients are getting timely answers, and resource consumption on the server.\na) CPU Utilization\nWe monitor the time BIND processes spend using the CPU, as reported by the Linux kernel Control Group version 2 metric usage_usec, and then normalize the value so that 100 % utilization = 1 fully utilized CPU. Our test machine has 16 cores, so its theoretical maximum is 1600 %. CPU usage is a cumulative metric and we plot a new data point every 0.1 seconds.\nb) Memory Usage\nSimilarly to CPU usage, we use the Linux kernel Control Group version 2 metric memory.current to monitor BIND 9\u0026rsquo;s memory consumption. It is documented as \u0026ldquo;the total amount of memory currently being used\u0026rdquo; and thus includes memory used by the kernel itself to support the named process, as well as network buffers used by BIND. Resolution of the resource monitoring data is 0.1 seconds, but the memory consumption metric is a point-in-time value, so hypothetical memory usage spikes shorter than 0.1 seconds would not show on our plots.\nc) Response latency - How quickly does the resolver respond?\nFinally, we arrive at the most useful but also the most convoluted metric: response latency, which directly affects user experience. Unfortunately, DNS latency is wildly non-linear: most answers will arrive within a split-millisecond range for all cache hits.
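As a practical aside, the Control Group version 2 counters described in (a) and (b) can be sampled straight from the cgroup filesystem. Here is a minimal sketch, assuming named runs in a dedicated cgroup; the group path below is a hypothetical example, not ISC's actual test harness:

```python
from pathlib import Path

# Hypothetical cgroup v2 directory for the named process; adjust to your setup.
CGROUP = Path("/sys/fs/cgroup/bind-test")

def sample():
    """Read cumulative CPU time (usage_usec) and point-in-time memory use."""
    cpu_stat = dict(line.split() for line in
                    (CGROUP / "cpu.stat").read_text().splitlines())
    cpu_usec = int(cpu_stat["usage_usec"])          # cumulative microseconds
    mem_bytes = int((CGROUP / "memory.current").read_text())  # point-in-time
    return cpu_usec, mem_bytes

def cpu_percent(prev_usec, cur_usec, interval_s):
    """Normalize so that 100 % equals one fully utilized CPU."""
    return (cur_usec - prev_usec) / (interval_s * 1_000_000) * 100
```

Sampling `sample()` every 0.1 seconds and feeding consecutive `usage_usec` readings to `cpu_percent` reproduces the normalization used in the charts (a 16-core machine tops out at 1600 %).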
Latency increases to a range of tens to hundreds of milliseconds for normal cache misses and reaches its maximum, in the range of seconds, for cache misses which force communication with very slow or broken authoritative servers.\nThis inherent nonlinearity also implies that the simplest tools from descriptive statistics do not provide informative results.\nTo deal with this complexity, the fine people from PowerDNS developed a logarithmic percentile histogram which visualizes response latency. It allows us to see things such as:\n95 % of queries were answered within 1 ms (cache hits) 99 % of queries were answered within 100 ms (typical cache misses) 99.5 % of queries were answered within 1000 ms (problematic cache misses) and so on.\nEven more importantly, a logarithmic percentile histogram allows us to compare the latency of various resolver setups visually.\nTest Methodology Data Set and Load Factor For realistic results, we need a realistic query data set. This article presents results measured using traffic captures (of course anonymized!) provided by one European telecommunications operator. We would really love any samples other operators could provide, as diversity in our sample data would make our testing more representative.\nThese traffic captures contain one hour of traffic directed to 10 independent DNS resolvers, all of them with roughly the same influx of queries. In practice, we have 10 PCAP files: the first with queries originally directed for resolver #1, the second with queries directed to resolver #2, etc.\nThese traffic captures define the basic \u0026ldquo;load unit\u0026rdquo; we use throughout this article: traffic directed to one server = load factor 1x. To simulate higher load on the resolver, we simultaneously replay traffic originally directed to N resolvers to our single resolver instance under test, thus increasing load N times. 
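The traffic-slicing idea behind the load factor, combining N time-ordered query streams into one while preserving each query's original timing, can be sketched in a few lines (an illustrative toy, not the DNS Shotgun implementation):

```python
import heapq

def merge_streams(streams):
    """Merge N time-ordered (timestamp, query) streams into one ordered stream.

    Replaying the merged stream against a single resolver simulates load
    factor N while keeping every simulated client's original query timing.
    """
    return heapq.merge(*streams, key=lambda item: item[0])
```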
E.g., if we are testing a resolver under load factor 3x, we simultaneously replay traffic originally directed to resolvers #1, #2, and #3.\nThis definition of load factor allows us to avoid theoretical metrics like QPS and simulate realistic scenarios. For example, it allows us to test this scenario: \u0026ldquo;What performance will we get if nine out of 10 resolvers have an outage and the last resolver has to handle all the traffic?\u0026rdquo;1\nTest Design Here is the basic testbed setup we used to compare the BIND 9.18 series of releases to equivalent BIND 9.16 versions. We are intentionally not providing the exact hardware specifications, to discourage undue generalization of the results.\nWe use two servers: one to simulate (many) DNS clients using DNS Shotgun, and the other to run the DNS resolver under test. Each machine has 16 logical CPUs (eight physical cores with hyperthreading enabled) and 42 GB of memory. DNS Shotgun is configured to replay the original query stream (including timing) from one or more (original) telco resolvers to one machine running a resolver under test, with a 2000 ms timeout on the client side.2 BIND is configured with max-cache-size set to 30 gigabytes. Practically, all other values are left at default settings: the resolver is doing full recursion and DNSSEC validation. Also, the resolver has both IPv4 and IPv6 connectivity. The resolver and client machine always start from a completely clean state; most importantly, the resolver always starts with an empty cache. This approach allows us to measure the worst-case scenario, \u0026ldquo;how quickly will the resolver recover after a restart?\u0026rdquo; In practice, we inspect resolver behavior in the first 120 seconds, and expect service recovery within the first 60 seconds. Of course, 120 seconds is a short test compared to regular resolver uptime. We are focusing on the worst-case scenario, which is an empty cache.
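For reference, the resolver settings just described correspond roughly to this minimal named.conf fragment; this is a sketch, not the exact test configuration, which is not published:

```
options {
    max-cache-size 30g;      // cache capped at 30 gigabytes
    recursion yes;           // full recursion (the default for a resolver)
    dnssec-validation auto;  // validate using the built-in root trust anchor
};
```

Everything else stays at the BIND defaults, as noted above.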
Depending on client behavior, the resolver can handle even more load after it has had more time to fill its cache. By starting in a clean state, we ensure that the performance levels described in this article can be safely reached without worrying about system restarts, complicated load balancing, etc. There is one point I cannot stress enough:\nIndividual test results like response rate, answer latency, maximum QPS, etc., are generally valid only for the specific combination of all test parameters, the input data set, and the specific point in time.\nIn other words, results obtained using this method are helpful ONLY for relative comparison between versions, configurations, etc., measured on the exact same setup with precisely the same data and time.\nFor example, a test indicates that a residential ISP setup with a resolver on a 16-core machine can handle 160 k QPS. It\u0026rsquo;s not correct to generalize this to another scenario and say, \u0026ldquo;a resolver on the same machine will handle a population of IoT devices with 160 k QPS on average\u0026rdquo; because it very much depends on the behavior of the clients. If all of our hypothetical IoT devices query every second for api.vendor.example.com AAAA, the resolver will surely handle the traffic because all queries cause a cache hit. On the other hand, if each device queries for a unique name every second, all queries will cause a cache miss and the throughput will be much lower. Even historical results for the very same setup are not necessarily comparable because \u0026ldquo;something\u0026rdquo; might have changed on the Internet.\nPlease allow me to repeat myself:\nThis test was designed to compare BIND 9.16 to BIND 9.18, handling a specific set of client queries at a specific point in time. 
Depending on the test parameters and your client population, your results could be completely different, which is why we recommend you test yourself if you can.\nBIND 9.16 and 9.18 Performance Comparison - Test Results We use the same resolver configuration and traffic to test both versions. We ran the test four times, increasing the load factor from our base load factor to 5x, 10x, and finally 15x, to show how the resolver performs under heavy load. For each test, we measure CPU utilization, memory usage, latency with a cold cache, and latency with a warm cache.\nTest 1, Base Load Factor From these charts, we can see that version 9.18.19:\nUses dramatically less memory than 9.16! This result is expected, because as we explained in an earlier blog, BIND 9.16 included two separate implementations of the networking code, and the older implementation was finally removed in 9.18. Running two versions of this code made BIND 9.16 use up extra memory. Consistently uses fewer CPU resources than 9.16. Overall, latency in 9.18 is as good or slightly better than 9.16 while using much less memory and somewhat fewer CPU resources. We have established that the resolver running BIND 9.18.19 is at least as performant as BIND 9.16 under ordinary loads. Let\u0026rsquo;s see what happens if we push harder and increase the load by a factor of 5:\nHere, we can see that BIND 9.18 uses half the memory, and consumes fewer CPU resources than 9.16 in the 5x load test. Latency in the warm cache test is about the same, slightly better for 9.18, but in the cold cache test, we see significantly fewer failed queries in 9.18 vs 9.16.
(The intersection of the line with the top left quadrant of the chart shows queries that remain unanswered after any reasonable client has timed out.)\nLet\u0026rsquo;s see what happens under load 10x:\nBIND 9.18 spikes to higher CPU utilization right at startup when it works hard to catch up with the flood of traffic, doing more work in parallel on cache-miss queries than 9.16 was able to. After that initial spike it quickly settles down to lower utilization than 9.16. Memory usage is also initially higher in 9.18, but after about 8 seconds it drops to substantially less than 9.16 for the duration of the test. Notice how much narrower the shaded portion of the chart is around the 9.18 line than the 9.16 line on the memory usage chart. The narrower shading indicates far less variability in memory utilization in 9.18, an indicator of performance stability. At 10x load, 9.18 performs almost the same as 9.16 in terms of latency, with fewer queries timing out in the cold cache scenario with 9.18 - this is the payoff of higher CPU utilization at the very beginning of the test.\nIn the final test scenario, with a 15x load factor, we expect to see more variability (wider shading around the lines) as BIND grows unstable under heavy load.\nWith the 15x load factor, the resolver is practically overloaded. CPU utilization is pretty high, although after the first 5 seconds, it is again significantly lower with BIND 9.18. We see slightly wider variance bars on the memory utilization chart, with 9.18 again using less memory than BIND 9.16, after an initial spike. BIND 9.18 has slightly better results from cache hits in the warm cache scenario, though the difference is probably not significant. From looking at the intersection of the lines at the top left of the chart, fewer queries are timing out in the cold cache scenario with BIND 9.18.
The 9.18 resolver is able to cope better with this overload situation, and 3 % fewer queries time out than with 9.16.\nConclusions We have extensively tested BIND 9.18.19 resolver performance using traffic captures from a telecommunications operator. We conclude that this new version outperforms the resolver in BIND 9.16.44. In steady state BIND 9.18 uses much less memory, somewhat less CPU, and delivers answers to clients with lower latency. At the same time BIND 9.18.19 has better parallelization and is able to cope better with overload.\nWe embarked on this benchmarking project because we had multiple anecdotal reports from users of performance regressions in the initial versions of the BIND 9.16 resolver. Using the test method described above, we were able to confirm this regression in versions of BIND 9.16 prior to 9.16.19 and identify multiple issues introduced by the refactoring in that branch. By repeating the test between BIND 9.16.44 and BIND 9.18.19 we ensured that the same mistake will not affect our users who upgrade from the BIND 9.16 branch to 9.18.\nNote that we are hard at work on creating BIND 9.20 now, and when BIND 9.20 is released, sometime in Q1 2024, users will have a quarter to update from BIND 9.16 before that branch goes EOL.\nTo simulate higher load factors, we slice and replay the traffic using the method described in this video presentation about DNS Shotgun around time 7:20. Most importantly, this method retains the original query timing and realistically simulates N-times more load. This method works under the assumption that the additional traffic we simulate behaves the same way as the traffic we already have. I.e., if you have 100,000 clients already, the assumption is that the next 100,000 will behave similarly.
This assumption allows us to re-use slices of the original traffic capture from 10 resolvers to simulate the load on 20 resolvers.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe DNS Shotgun timeout of 2 s was selected to reflect a typical timeout on the client side. BIND uses an internal timeout of 10 s to resolve queries; the resolver continues resolving the query even after the client has given up. This extra time allows the resolver to find answers even with very broken authoritative setups and cache them. These answers are then available when the clients ask again.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/blogs/bind-resolver-performance-november-2023/","tags":["BIND","performance"],"title":"Comparative Resolver Performance Results of BIND Versions - November 2023"},{"categories":["Kea","Open Source","Stork"],"contents":"Stork 1.13 is the newest development release of our management tool for the Kea DHCP server. Stork is open source, licensed under the MPL 2.0 license. A single Stork server works with Stork agent probes deployed on each of your Kea servers to provide monitoring, including details of the software versions installed, CPU and memory usage, high-availability pairing and status, as well as detailed traffic and activity statistics on leases, DORAs, NAKed offers and so on.\nConfiguration Management With this release, Stork is really no longer just a dashboard and monitoring tool. Stork can now provide meaningful configuration control, over not just one Kea server, but your entire deployment of multiple Kea servers.\nSubnet Configuration Subnet configuration is one of the more complex, and error-prone areas of configuration, and one where there is a high likelihood that multiple Kea servers may be able to share configuration information. Most Kea users have configured their servers in a high-availability pair configuration, and a subnet which might be served by either Kea instance will need to be configured on both machines. 
This is why subnet configuration is the focus for Stork\u0026rsquo;s initial Kea configuration capabilities.\nSubnet View Stork provides a detailed list of all subnets configured in the managed Kea servers, with their local subnet identifiers. Different Kea servers may use different identifiers for the same subnet.\nUsage statistics by subnet are shown in pie charts.\nExpandable rows display the values of the various DHCP parameters specified at different configuration levels (i.e., subnet, shared network, and global level).\nFinally, the subnet view contains DHCP options specified for the subnet at different levels (i.e., subnet, shared network, and global level). The checkbox allows for filtering to display only subnet-level options.\nSubnet Editing Stork 1.13.0 allows the administrator with appropriate permissions to modify an existing subnet if the DHCP servers associated with this subnet have the subnet_cmds hook library installed and loaded. Stork currently doesn\u0026rsquo;t support modifying the subnet pools.\nThe Assignments multi-select can be used to associate the subnet with new servers or remove a subnet from the existing servers.\nAll DHCP parameters except relay configuration can be modified in the form. Typically, the same values are specified and populated to all the DHCP servers. However, it is possible to unlock selected parameters, and specify different values for different servers. In the picture below, different allocators and client classes have been specified for the servers.\nSimilarly, different DHCP options can be specified for different servers.\nShared Network View The detailed shared network view begins by listing the servers that include the shared network in their configuration. It also shows all subnets belonging to the shared network.\nShared network usage statistics are shown in pie charts.
A bug fix included in this release has corrected some statistics that were previously inaccurate.\nNext, there are DHCP parameters configured for the shared network in different Kea servers. The expandable rows contain the values of these parameters specified at different configuration levels (i.e., shared network and global level). The example shown includes only one Kea server, but if there were more configured, they would be displayed in additional columns. This view allows the administrator to see at a glance whether different Kea servers have different settings for the shared network.\nFinally, the shared network view contains DHCP options specified for the shared network at different levels (i.e., shared network and global level). The checkbox allows for filtering only shared network-level options.\nLDAP Authentication for Stork Users The ability to link optional libraries has been a popular and successful feature for Kea. Stork now has its first hook library, to provide Stork user authentication via LDAP. This feature has been in development for several months, as it required implementation of some infrastructure for loading external libraries, but it is now ready for use.\nWe have based our LDAP integration on the go-ldap library. Because we didn\u0026rsquo;t have much in the way of detailed user requirements for LDAP, we had a lively discussion both on and off our open Stork GitLab repository about requirements, user preferences, and defaults. It is now possible to designate LDAP as the authentication method for some or all users, although an individual user must be configured to authenticate via one mechanism or the other.\nStork 1.13.0 includes support for reading the user group or role from LDAP and mapping that to a Stork user group for authorization purposes. For example, it is possible to limit access to Stork to users included in a specific group in the LDAP directory. 
The current release does not support creating or managing user roles in Stork from LDAP, so if you have a user group in LDAP that you want to assign Stork privileges to, you must create a corresponding user group in Stork to implement that. By default, Stork maps the stork-admin LDAP group to the admin role in Stork and the stork-super-admin group to the super-admin role in Stork. These groups may be overridden if needed.\nReferences For full details on the changes and new features, see the Stork 1.13 Release Notes. Download Stork 1.13 from this web site or install a DEB or RPM package from our package repository. Note that with this release we are now supporting ARM architectures! Please submit bug reports to our open Gitlab development system. Ask questions or give us feedback on this tool on the stork-users@lists.isc.org mailing list. See the list information page to subscribe. There are screenshots, and a few design discussions that may interest users, linked on the wiki pages of our Gitlab. ","permalink":"/blogs/stork-1-13/","tags":["Kea","Stork"],"title":"Stork 1.13 Released"},{"categories":["BIND","Performance"],"contents":"We are only one quarter away from producing the next stable branch of BIND, and from ending maintenance of BIND 9.16 versions. This is a good time to look at the performance of BIND 9.18 vs 9.16, as many users have not yet adopted 9.18. The bottom line is, BIND authoritative server performance is remarkably stable, and users updating from 9.16 to 9.18 should not anticipate a performance impact. There are, however, a number of other changes to be aware of when moving from BIND 9.16 to 9.18, which are explained in this Knowledgebase article.\nAs part of the development cycle of BIND, we carry out a set of performance tests.
Our performance laboratory system for DNS authoritative testing (somewhat unimaginatively called \u0026ldquo;perflab\u0026rdquo;) continually cycles through all configurations registered with it, building BIND and running performance tests. (Details of the system have been presented elsewhere, so they will not be described here.) In this way, as the development branches are updated with changes, the effect of those changes on performance is quickly visible. We use a completely different methodology for testing DNS resolver operations, because that is impacted by factors on the live Internet, and tests with canned queries are a poor predictor of real-world performance. Resolver performance results will be presented soon in a separate blog post.\nAmong the configurations tested, the following are especially significant:\nOne million zones: These tests simulate the environment of a web hosting provider, whose server might be serving zones for a number of customers. Each of the zones in the test is very small, comprising just an SOA, NS, A, and AAAA record. One million delegations: This configuration simulates a large top-level (or second-level) domain, where the zone comprises almost exclusively delegations to the zones of subdomain owners. One million resource records: This is the opposite extreme to the one million delegations test. In that test, BIND returns referrals in response to queries; in this test BIND returns resource records from within the zone. Root zone: As some of the twelve root server operators run BIND on at least some of their servers, and given the crucial importance of the root zone to the global DNS infrastructure, we test the performance of BIND when serving a copy of the root zone.
For each scenario, the query set is one that will result in about 5% NXDOMAIN responses (with the exception of the root zone test, where we would expect a much higher rate of NXDOMAINs).\nTo obtain a figure for the query rate that BIND can handle, perflab starts up the server under test. It then fires queries at the server for 30 seconds and calculates a query rate. This 30-second measurement is repeated 30 times and a figure for the query rate (and standard deviation) is calculated. The first measurement is discarded from calculations, to allow the server to settle down, i.e., for all data structures to be created and reach their final size. The chart above shows the latest value, but there was virtually no measurable variation in the runs for 1K and 1M zones and very little variation for the other scenarios over the trailing month.\nWe also have a new metric, startup time, which measures the number of seconds from starting BIND until it prints \u0026ldquo;running\u0026rdquo;. Here we see a slight drop in 9.18, but in absolute terms it is a drop of only a few seconds.\nWe hope that this information helps remove concerns about a performance regression from the list of considerations when planning an update to BIND 9.18.\n","permalink":"/blogs/bind-performance-september-2023/","tags":["BIND","performance"],"title":"Comparative Performance Results of BIND Versions in Authoritative Configurations - September 2023"},{"categories":["ISC","Open Source"],"contents":"It’s time once again to get to know one of our engineers!\nMichal Nowak, one of our quality-assurance staff members, joined ISC not long before the Covid-19 pandemic began. Although he wasn’t able to prevent a worldwide health crisis, he works hard to prevent problems in our BIND 9 software before it’s released.
Along with the other QA engineers, Michal ensures that BIND 9 releases are delivered on time and at high quality.\nMichal regularly debugs unit tests, coordinates with Cloudsmith support for our BIND packages, improves our QA scripts, investigates and addresses system test failures, runs stress tests on different operating systems, fine-tunes our continuous integration (CI) processes, and more! So even though our customers and users may not interact with him on a daily basis, he is constantly working on your behalf.\nAfter studying computer networking and the Unix operating system at the Faculty of Information Technology of the Brno University of Technology in Czechia, Michal later attended a liberal arts college at the Charles University in Prague, focusing on Western philosophy, literary theory of Milan Kundera, and Bohumil Hrabal\u0026rsquo;s works of fiction. He’s a true Renaissance man!\nPrior to joining ISC, Michal worked as a QA engineer at Red Hat for five years and at SUSE for four years. In those organizations, he focused on general software maintenance updates and testing Docker and virtualization images of the Linux platform.\nWhen asked why he chooses to work on open source rather than commercial software, Michal says, “I feel strongly about contributing to the community. 
So far in my career, I have worked only for open source companies because their products are freely accessible to the public, benefiting the broader community and positively impacting users worldwide.”\nMichal is proud that ISC “brings together a team that keeps refactoring the old code to modern engineering standards.” He also admires how ISC’s engineers are “always ready to own their mistakes and correct them transparently.”\nHe adds, “The best part of working at ISC is the BIND 9 team\u0026rsquo;s decades-long experience with DNS protocols and the hardened real-world mission-critical implementation.”\nWe are fortunate to have Michal on our staff and know that our software is better thanks to his work.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers and users. As always, we welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-nowak/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer – Michal Nowak!"},{"categories":null,"contents":" Thank you for your interest in ISC\u0026rsquo;s software and services. An ISC representative will respond to your message as soon as possible. We are a small team, so please allow 1-2 business days. 
You may also call us at +1 650 423 1300.\nIn the meantime, for more information about our BIND 9, Kea DHCP, or ISC DHCP support contracts, please visit ISC Support.\nIf you have a question about BIND 9, Kea DHCP, Stork, or ISC DHCP and you are not looking for a support contract, you might want to try asking the user community via one of the user mailing lists we host at https://www.isc.org/mailinglists.\nIf you are trying to report an issue with one of our software projects, please visit the appropriate site:\nFor BIND 9 issues: our BIND 9 GitLab (https://gitlab.isc.org/isc-projects/bind9/-/issues) For Kea issues: our Kea GitLab (https://gitlab.isc.org/isc-projects/kea/-/issues) For Stork issues: our Stork GitLab (https://gitlab.isc.org/isc-projects/stork/-/issues) For ISC DHCP issues: our DHCP GitLab (https://gitlab.isc.org/isc-projects/dhcp/-/issues) If you think your issue could be a security vulnerability in one of our projects, please visit our Reporting Security Vulnerabilities page and follow the instructions there.\nISC\u0026rsquo;s sales team is located in California, in the United States. Our business hours are Monday through Friday, 9:00 AM to 5:00 PM, Pacific Time.\nThank you and we look forward to speaking with you soon.\n\u0026ndash;The ISC team\n","permalink":"/thankyou-contact/","tags":null,"title":"Thank you for contacting ISC!"},{"categories":["Kea","Open Source"],"contents":"Kea 2.4 is the newest stable branch of the Kea DHCP server, suitable for production deployment. If you have been waiting to deploy some of the new features previewed in our Kea 2.3 development branch, your wait is over. Kea 2.4 brings many new features developed during the Kea 2.3 development cycle to a production release. For full details of the new features, see the Kea 2.4.0 Release Notes.\nKea 2.0 is now EOL. If you are running Kea 2.0 or an older version, we recommend updating. Kea 2.2 will be supported until the release of Kea 2.6.
See the ISC Software Support Policy for the Kea release schedule. We are still creating new stable branches annually, and supporting each until we have two newer stable branches. This helps us maintain an aggressive pace of new development. Although it may seem as if there is \u0026ldquo;nothing new\u0026rdquo; in DHCP, that is far from true: enterprise networks in particular are only getting more complex, and since DHCP is used as a control plane for devices, there is a steady stream of new feature requests. At some point, however, we may need to have a longer support lifetime, because users won\u0026rsquo;t want to update their DHCP servers even biennially. If any of our users think that time has come, we hope they raise the question on our Kea user mailing list.\nThere are some changes in this version that are not backward-compatible. Please consult our documentation on updating for changes that may impact your monitoring, package installation, and database schema.\nNew Lease Allocation Strategies Since its inception in 2011, Kea has been using an iterative allocation strategy, which means it iterates through the address space and allocates available addresses sequentially. This is a simple, fast, and easy-to-understand allocation strategy that works for most use cases. We now have two additional allocation options: random and Free-Leases-Queue (FLQ).\nThe random allocator is somewhat slower than the iterative allocator but provides great resistance against scanning attacks, in particular when a vast address space is available, such as in IPv6. This allocator should help reduce collisions when two or more Kea servers are assigning addresses from the same or overlapping pools, such as when using a shared database lease backend.\nFLQ is a new allocation strategy. When the FLQ allocator is selected, Kea generates a list of all possible leases on startup and keeps it in memory; it uses this list to assign the first available lease from a pool.
This allocator is suitable when DHCP servers are working with nearly depleted pools, and the time to find an available lease via other allocators becomes too long. This new allocator, however, slows down the server startup and reconfiguration, and it also uses more memory, so users may wish to experiment with it in their specific configuration. It can be used for address assignment in DHCPv4 and prefix delegation in DHCPv6.\nThe iterative allocator will remain the default, but the choice of an allocation strategy is now a configuration decision depending on the deployment requirements.\nBulk Leasequery The subscriber-only Leasequery hook has been extended to support Bulk Leasequery (BLQ) for DHCPv4 and DHCPv6. BLQ allows rebooting routers and switches to reacquire their lost state by querying the DHCP server. BLQ differs from the typical DHCP protocol in two fundamental ways: first, it uses TCP, and therefore is connection-oriented; second, it does not follow the \u0026ldquo;one query, one response\u0026rdquo; paradigm. The \u0026ldquo;bulk\u0026rdquo; in BLQ means that devices can send a single query and receive many (possibly millions) of answers. We have done extensive internal testing and the feature appears to be stable, but we advise caution with its use. We are eager to get feedback on this topic, especially with regards to interoperability with other vendors. The DHCPv4 BLQ is currently available for all backends (memfile, MySQL, PostgreSQL). The DHCPv6 BLQ is currently available for memfile, with some very limited functionality for MySQL and PostgreSQL backends.\nPackage Updates Our Cloudsmith repository for binary packages has become very popular, and we think the majority of our subscribers are now using that repository. Native Deb, RPM, and APK packages are available for Alpine 3.15, 3.16, 3.17; CentOS 7; Debian 10, 11, 12; Fedora 36, 37, 38; RHEL 8, 9; and Ubuntu 18.04, 20.04, 22.04. All packages are built for amd64 architecture. 
For details, see https://cloudsmith.io/~isc/repos/.\nKea\u0026rsquo;s official APK, Debian, and RPM packages have been restructured and made to follow a consistent packaging standard. Please see the new Installation From Cloudsmith Packages and Caveats When Upgrading Kea Packages sections in the Kea ARM for more details. Depending on how Kea was previously installed, upgrading to this release or later on Debian or Ubuntu systems could cause the DHCPv6 and/or the DDNS server, as well as the open source hooks, to be removed.\nFancier Client Options We have done a lot of work to extend support for those of you with complex device options, adding support for vivco sub-options with multiple different enterprise IDs, for example. The Discovery of Network-designated Resolvers (DNR) options have been implemented for both DHCPv4 and DHCPv6. This is an implementation of the draft-ietf-add-dnr-16 IETF Internet Draft, which is expected to be published as an RFC soon. The options allow configuration of DNS over various transports, such as TLS (DNS-over-TLS or DoT), HTTPS (DNS-over-HTTPS or DoH), and others.\nISC DHCP Work-alike Features We added affinity for already-released leases, \u0026ldquo;early allocation\u0026rdquo; of addresses during DHCPOFFER, and the ability to set the DNS time-to-live (TTL) as a percentage of the lease lifetime. All of these were features of ISC DHCP that have been requested by Kea users.
We have also added support for a new feature called \u0026ldquo;template classes,\u0026rdquo; which is similar to the \u0026ldquo;spawning classes\u0026rdquo; feature in ISC DHCP.\nConfiguration Management A number of changes make configuration management easier, including a configuration hash function that makes it easy to tell whether the configuration has changed, new commands for querying and updating host reservations, an audit trail in the forensic log to show changes made administratively, and a major update to Kea\u0026rsquo;s YANG and Sysrepo support.\nFor a complete list of new features, please check out the Kea 2.4.0 Release Notes.\nKea Hooks Basic Commercial End User License You may not have noticed that last year we updated the basic commercial license for the non-open source hooks. (The open source hooks remain licensed under MPL 2.0; this is unchanged.)\nThe Premium hooks package, purchased online without support, is now for smaller businesses and non-profits only. There are 12-month license subscription options for 1,000, 6,000, 15,000, and 30,000 active leases at prices starting at $549. We think it is fairer for large enterprises and service providers to pay more, but we also wanted to preserve a low-cost option for universities and other non-profits, so qualifying organizations can purchase the lowest-cost option for up to 30,000 active leases.\nLarger deployments can access the Premium and Subscriber hooks as a bundle, either without support at the Basic level, or with support at Bronze, Silver, or Gold levels. 
Our levels are:\nBasic - premium and subscriber hooks; advance security notifications (ASNs); no support\nBronze - premium and subscriber hooks; ASNs; email support during business hours\nSilver - premium, subscriber, and the new RBAC hook; ASNs; 24x7 email and phone support with SLAs\nGold - premium, subscriber, and the new RBAC hook; ASNs; 24x7 email and phone support with faster response times\nOur support prices are based on deployment size, as measured by the number of simultaneous leases provided. For more information on the support options, please see our Support page and our Kea Support Subscription datasheet.\nReferences\nKea ARM Kea Release Roadmap Kea Significant Features matrix Kea Hooks Basic Commercial End User License ","permalink":"/blogs/kea-2-4-0/","tags":["Kea"],"title":"Kea 2.4.0 Released"},{"categories":["ISC","DNS","Training"],"contents":"Four of us from ISC attended the NANOG 88 meeting in Seattle, Washington, on June 12-14. NANOG is a great place to meet independent network operators from across the US, and as usual, we talked with a number of current and prospective BIND 9 and Kea DHCP users, as well as at least one IX hosting an F-root node.\nISC contributions to the event included a day-long DNS class, using dig, delivered by Eddy Winstead. T. Marc Jones, who has joined the NANOG DEI committee, participated in a panel discussion on dealing with bias at the DEI Committee’s inaugural Embracing Equity event at the Seattle Spheres.\n","permalink":"/blogs/nanog88-notes/","tags":["training"],"title":"NANOG 88 - Seattle Trip Notes"},{"categories":["ISC","DNS","Training"],"contents":"Five of us from ISC attended the RIPE 86 meeting in Rotterdam, May 22 - 26, 2023. RIPE meetings bring together people working on IPv6, DNS, open source, peering, abuse, Internet measurement, and Internet policy. Highlights, from one biased perspective:\nThe weekend prior to the RIPE meeting, ISC co-sponsored a DNS Hackathon.
The hackathon had its own logo, t-shirt, and cookie (the stroopwaffle), and was well-attended by an energetic group. Tony Finch from ISC worked on a project that made significant progress towards a more efficient, easier-to-use way of scripting DNS analysis via RIPE nodes. In addition, Tony\u0026rsquo;s nsnotifyd was a critical component of the DNS-Out-of-Band Signaling (DNS-OOPS) project, which was able to trigger BGP route inserts and withdrawals based on Knot, BIND, and NSD authoritative server readiness signals.\nThe RIPE meeting kicked off with a plenary talk in which Vesna Manojlovic of RIPE gave an impassioned call to action for us all to contribute whatever we can to addressing the climate change emergency. She made a compelling case for not dismissing it as a problem for just the oil and gas industry, or any other industry, explaining that this doesn\u0026rsquo;t relieve the rest of us of responsibility.\nBenno Overeinder and Robert Carolina stepped in at the last minute to replace Maarten Aertsen from NLNET Labs in a 3-part presentation with Bastiaan Goslings from RIPE on The EU Regulating (Open Source) Software: The Proposed Cyber Resilience Act and Product Liability Directive. Rob, who teaches cybersecurity to law students when he isn\u0026rsquo;t lawyering for ISC, just happened to have a slide deck handy that explained how product liability works and that he was able to deliver in a five-minute class. Rob described how the updated Product Liability Directive might mean that, for example, a defective car containing a component which incorporated a popular, general-purpose open source library might expose the original author of that open source library to liability.
This change in EU liability law could have a massive impact on open source if it isn\u0026rsquo;t moderated before it goes into effect.\nPeter Thomassen gave an update on deSEC: Secure DNSSEC Hosting in which he said \u0026ldquo;deSEC is a non-profit doing the same thing as Let\u0026rsquo;s Encrypt, but for DNSSEC.\u0026rdquo; It\u0026rsquo;s worth checking out.\nTheodoros Fyllaridis reported on the number of Law Enforcement Agency (LEA) requests received by RIPE in his presentation, \u0026ldquo;How the RIPE NCC Handles LEA Requests\u0026rdquo;; see page 14 here. The transparency was welcome, but it was amusing to see that the vast majority of LEA requests were for information RIPE does not have, and these were mostly from French law enforcement.\nThere were several discussions about the hot topic of payments for carrying traffic, including a RIPE task team responding to the European Telecom Network Operators Association study supporting the European Declaration on Digital Rights and Principles (which had an open comment period that just ended in May). The ETNO study naturally recommended payments from hyperscalers to network operators.\nThe team behind the Internet.nl Standards Compliance Test Suite gave an overview of the toolset. This is not a new thing, but it is very popular and some readers who aren\u0026rsquo;t aware of it might be interested in setting up a local instance of the tool for their users.\nThere was a talk in the Connect working group about a tool (Amethyst) for analyzing the connection between geopolitics and the connectivity architecture (in central Asia), with very impressive and elaborate visualizations. 
This seems like it might somehow be useful for those people trying to analyze access to the DNS root, and places where better peering is needed.\nMatthijs Mekking, from the BIND team at ISC, gave a talk on testing BIND 9 and DNSSEC Multi-signer Models, in which he uncovered a number of ways that the multi-signer model breaks BIND\u0026rsquo;s assumptions. Matthijs was interrupted - just as he got to the slide entitled \u0026ldquo;Let\u0026rsquo;s Go Crazy\u0026rdquo; - by a loud fire alarm that sent everyone to the exits. There is interest in the multi-signer model to enable moving a signed zone from one provider to another without going insecure.\nOn the topic of open resolvers and the DNS4EU project, Geoff Huston, in Measuring Open Resolver Use in the EU, noted that the use of open resolvers has \u0026ldquo;declined sharply from a peak of 30% of users in early 2022 to 10% of users today,\u0026rdquo; quite a sharp change that he could not explain the reason for. It almost seems as if the public awareness generated by the DNS4EU project has sent users back to their access providers for DNS services. RIPE is organizing a DNS Resolver Task Force to document best practices. (What they need now are fewer participants, who are more committed to the work.)\nAnand Buddhdev, in his update on K-root, reported an incident in which one of their authoritative systems was serving zones with expired signatures. This was due to short signature validity relative to a long transfer chain and TTLs. I was kind of surprised this hasn\u0026rsquo;t happened to anyone else before (Anand is quite an expert operator) but at any rate, I learned what EDNS Expire is for.\nTony Finch gave a very entertaining lightning talk on Where Does My Computer Get the Time From? on the final day of RIPE. (Watch the video, it is funnier than the slides. 
Tony has great comedic timing.)\nAs always, the presentations were only half of the experience, and, sadly for those of you who missed the event, the in-person interactions were great. However, this is the first RIPE this blogger has attended that did not also have fantastic food, so there is that comforting thought for those of you who stayed home.\n","permalink":"/blogs/ripe86-notes/","tags":["training"],"title":"RIPE 86 - Rotterdam Trip Notes"},{"categories":null,"contents":"Kea Migration Assistant (KeaMA) tool The Kea Migration Assistant is a tool that will partially translate a working configuration for ISC DHCP to an equivalent configuration for Kea. It is not possible to automatically translate the entire configuration, so the result will require some manual fix-ups. This tool is available on-line, or you can download the source code, install a package, or run it in a Docker container.\nWhy Migrate? ISC ceased maintaining ISC DHCP in 2022. The software may continue to work in your environment indefinitely, but at some point you will need to upgrade the operating system on the servers running dhcpd, and you may encounter problems. It is impossible to predict when this will happen, so it is important to develop a migration plan as soon as possible.\nThe Kea DHCP server is a completely new design, which benefited from some lessons learned from supporting ISC DHCP for nearly three decades.\nMany optional features are implemented as hook libraries, and the DHCPv4, DHCPv6, and Dynamic DNS applications are separate packages, so you need only install the software you plan to use. The extensive Kea API supports integration into your existing management systems and online reconfiguration. Components that are frequently modified, such as host reservations and subnets, can optionally be stored in a common off-the-shelf database, separate from the main Kea configuration file, using premium hooks.
Kea supports a simpler high availability mode in place of the DHCPv4 failover draft implemented by ISC DHCP. The Kea HA mode works equally well for both DHCPv4 and DHCPv6. Kea is multi-threaded, and offers much higher performance than ISC DHCP on modern computers. There is an Administrative Reference Manual for Kea (ISC DHCP just had man pages). There is an open source monitoring and configuration tool, called Stork, for Kea users. Migration provides an opportunity to learn more about aspects of your network and your DHCP configuration that may not have been touched in a while. As a result, your configuration may become simpler and cleaner. Migration also gives you a chance to document the new configuration, something which can often get overlooked.\nPhoto by Chris Briggs on Unsplash\nPlanning Your Migration Ideally, the best time to undertake a migration is when you already have other changes going on, and it will be less disruptive to try something new. For example, if you have already planned major infrastructure changes, you are adding new office locations, switching to a new network server OS, or attempting to achieve new networked application security goals, any of those might present a good opportunity for migration.\nSince migration can be a daunting process, ISC recommends several small- to mid-sized migrations over time rather than one large migration. One way to get a quick idea of how hard it might be for you to migrate is to try our hosted KeaMA tool for translating an ISC DHCP dhcpd.conf file to a Kea configuration file. This doesn\u0026rsquo;t require any commitment and is easy - and the result will indicate how straightforward migrating that DHCP server will be.\nRecommended Steps for a Successful Migration Review the current network design and DHCP configuration file(s). Set up an experimental Kea server in a non-production environment to familiarize yourself with the software.
Review the Kea documentation to decide whether you want to use any features implemented in hook libraries. Acquire any premium hook libraries you plan to use. If you have a very simple deployment, take a look at our template configuration for a single-site organization. Determine which segment of the network to migrate first. Frequently administrators will choose a network with fewer human users (e.g. a server segment) and fewer legacy devices, which may have non-standard requirements and behavior. Use the Kea Migration Assistant on-line or download and run Kea Migration Assistant locally to prepare a rough initial configuration file for the new server from the existing ISC DHCP configuration file. Review and modify the configuration file produced by the tool, paying particular attention to the configuration of backup or load-balancing partner services, and client classification and option configuration. The Kea Migration Assistant will add log messages for any sections of the ISC DHCP configuration it could not translate: review the linked descriptions of differences between the two applications from your KeaMA log messages. Load the resulting candidate configuration into a Kea instance and run the configuration checkers to ensure you have a valid Kea configuration. Test the resulting Kea configuration in a non-production environment, using the software and OS versions you plan to deploy. ISC\u0026rsquo;s perfdhcp tool may be helpful for generating simulated client traffic for testing. Schedule the cutover for an off-peak, lower-traffic time and notify users (if that is your process). Translate your current valid DHCP leases into the Kea lease file format using the KeaMA Lease tool. Load the resulting lease file into your target Kea server. 
Cutover by some combination of lowering the lease lifetimes for the devices that will be migrating, re-configuring your relays to relay to the new DHCP server, moving the subnets to be migrated to the new server, or entirely decommissioning the old server. Some more steps here (we don\u0026rsquo;t know everything!) Verification, documentation Rinse and repeat with other network segments Resources The best resource is often other users. We highly recommend asking for advice on the kea-users mailing list and/or the dhcp-users mailing list.\nKea Configuration Differences For a complete list of the known ISC DHCP configuration elements that do not translate cleanly to an equivalent Kea configuration, search in the Kea project repository for issues tagged with the \u0026ldquo;migration\u0026rdquo; label.\nThe major configuration areas that will likely require redesign are the failover or high-availability solution, client classification, and host reservations. Kea has an alternative to the DHCPv4 failover draft implemented in ISC DHCP: the Kea feature is called \u0026ldquo;High Availability,\u0026rdquo; and it works equally well for both DHCPv4 and DHCPv6. Kea does support client classification, but there is no equivalent for ISC DHCP\u0026rsquo;s hyper-flexible permit/deny scripting language. The option inheritance hierarchy in Kea is different than in ISC DHCP, and the configuration for custom vendor-specific options is also different. Kea has robust support for host reservations, but it is different from ISC DHCP\u0026rsquo;s. 
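For example, a Kea host reservation is normally declared inside the subnet it belongs to. A minimal illustrative Dhcp4 fragment (the subnet, addresses, and hostname here are invented for this sketch, not a recommended layout) might look like this:

```json
{
  "Dhcp4": {
    "subnet4": [
      {
        "id": 1,
        "subnet": "192.0.2.0/24",
        "pools": [ { "pool": "192.0.2.100 - 192.0.2.199" } ],
        "reservations": [
          {
            "hw-address": "00:11:22:33:44:55",
            "ip-address": "192.0.2.10",
            "hostname": "printer-1"
          }
        ]
      }
    ]
  }
}
```

Reserving an address outside the dynamic pool, keyed by hardware address, plays roughly the role of an ISC DHCP host declaration with a fixed-address statement.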
In ISC DHCP all host reservations are global; in Kea, host reservations are by default per-subnet, although global host reservations are also supported.\nThe following Knowledgebase articles may help with understanding Kea functionality compared with ISC DHCP\u0026rsquo;s older features:\nKea High Availability vs ISC DHCP Failover (comparison) Kea Configuration Introduction Understanding Client Classification in Kea Kea Logging for ISC DHCP Administrators Kea Migration Assistant - DHCP Configuration ISC has developed the Kea Migration Assistant (KeaMA) tool to make it easier for users to translate their configuration files from one format to the other. KeaMA is a branch of the legacy ISC DHCP server and is available in a separate public repository. It takes the existing local ISC DHCP configuration and outputs it as a Kea JSON configuration. As mentioned above, some configuration elements cannot be translated by machine, and will require hand-editing. These are noted in the logs, with links to issues in ISC\u0026rsquo;s GitLab that explain the issue. See this KB article on Migrating from ISC DHCP to Kea using the KeaMA tool.\nAdministrators need to run the tool once each for IPv6 and IPv4 configurations; KeaMA produces separate output files for each. KeaMA provides diagnostic messages when a direct translation is not available or possible, and provides a link to the related Kea GitLab issue.\nThe Kea Migration Assistant is included as part of the most recent ISC DHCP versions. It can be built from source or installed as a pre-compiled package from ISC\u0026rsquo;s package repository. There is also an experimental Dockerfile.\nKea Migration Assistant packages Kea Migration Assistant sources Kea Migration Assistant Dockerfile Kea Migration Assistant - Leases This experimental Python script takes an ISC DHCP lease file as input, and outputs the same leases in the Kea lease file format. The leases then need to be loaded into the target Kea server. 
Note that, depending on how much time this process takes, some of the leases may expire during the migration. The 2023 video below includes a demonstration of the lease migration tool (scroll to the end of the recording for the demo).\nKea Migration Assistant - leases repository Video tutorials ISC has conducted several webinars in an effort to help users migrating from ISC DHCP to Kea.\nMigrating to Kea from ISC DHCP - 2020 This video, the 6th in a 2020 series on using the Kea DHCP server, focuses on how to migrate to Kea from ISC DHCP. Carsten Strotmann explains how to build and use the KeaMA configuration migration tool.\nMigrating to Kea from ISC DHCP - 2023 This 2023 video focuses on the higher-level planning for a migration, and includes a demonstration of the lease migration tool.\nNANOG 88 Lightning talk/demo of the hosted Kea Migration Assistant\nCreating a Kea configuration from an ISC DHCP configuration - 2019 In this video, Alan Clegg covers using the KeaMA utility to create a Kea configuration from an ISC DHCP configuration.\nNANOG 76 talk on DHCP Migration to Kea\n","permalink":"/dhcp_migration/","tags":null,"title":"Migrating to Kea from ISC DHCP"},{"categories":["ISC","Open Source"],"contents":" Each quarter we like to profile one of our engineers, to let our customers and users get more familiar with some of the people they may interact with as they use our software.\nOur newest support staff member, Darren Ankney, joined us in January 2023 after more than 26 years at First Network Group. He started there as an end-user technical support specialist and became head of their Tech Support team, and then began working on their DHCPatriot network appliance in 2000.\nDarren’s extensive experience with DHCP was all learned on the job; he holds a degree in Economics and Management from Ohio Northern University, but has been working in IT his whole career. 
(He points out as a fun fact that Ada, Ohio – home to Ohio Northern – is where the footballs used in the NFL’s Super Bowl are made.)\nDarren is excited to join the ISC team because, \u0026ldquo;Open source software has given to me with both hands over my career. The Internet itself would not exist without open source software.\u0026rdquo; He is delighted for the opportunity to \u0026ldquo;pay it forward\u0026rdquo; to the community.\nCustomers and users are already benefiting from Darren’s many years of troubleshooting DHCP; in fact, he started responding to questions on the dhcp-users mailing list before he even officially started working here! He came to us with a good grasp of Kea’s functionality and configuration, but he says, \u0026ldquo;I found out pretty quickly that there were some areas of Kea that I was not experienced with. So far my time here at ISC has been a whirlwind of learning!\u0026rdquo;\nHe appreciates his ISC coworkers: \u0026ldquo;Everyone is so positive, so helpful, so skilled and knowledgeable. This is an amazing team!\u0026rdquo; He looks forward to deepening his familiarity with Kea and becoming more proficient with BIND as he works to support our customers.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. 
As always, we welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-ankney/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Darren Ankney!"},{"categories":["Open Source","Compliance","Company News"],"contents":"ISC and NLnet Labs today sent a joint letter to the European Parliament committee working on the EU Cyber Resilience Act.1 What follows is the letter, and some additional explanatory text provided to the committee.2\nApril 19, 2023 Amsterdam, Washington DC\nTo: ITRE’s (shadow)rapporteurs on the Cyber Resilience Act\nA plea for fairness for non-profit developers of open source software\nDear Members of the European Parliament,\nWe appreciate and applaud the goals of policy makers in both Europe and the United States to focus greater attention on the relationship between the software supply chain and cybersecurity. Our purpose today is to highlight an issue of ongoing concern to many of us who develop open source software without a profit motive: how to apportion the proposed new regulatory and liability burdens among the various economic actors engaging in software distribution in a manner that is both fair and equitable. We write to you as two non-profit developers and maintainers of some of the most well-known and widely adopted open source internet infrastructure software, each without shareholders and recognized as charities in respectively the Netherlands and the US.\nParties involved in this complex content-based ecosystem, which is unlike anything else in industrial history, must be treated fairly - and be seen to be treated fairly - by policy makers. Without a fair allocation of burden, policy makers risk destroying the very open development and distribution system that enabled the creation and operation of the Internet they now seek to protect. Fairness demands that \u0026ldquo;Responsibility must be placed on the stakeholders most capable of taking action to prevent bad outcomes, not [..] 
on the open-source developer of a component that is integrated into a commercial product.\u0026rdquo; This quote, from the US Cyber Security Strategy, is fully consistent with the NLF and the Blue Guide. In contrast, the CRA moves away from the nuanced multifactor discussion of charitable activities in the Blue Guide and places the burden unconditionally on non-profit developers like us, merely because we seek to recover maintenance and development costs by providing charged-for technical support or consultancy services to businesses that implement or operate our software.\nWe ask you not to undermine this funding model that has allowed us to distribute secure and stable open source internet infrastructure software for decades without the intent to make a profit and to consider the following amendment and justification.\nRespectfully,\nInternet Systems Consortium, Inc.\nStichting NLnet Labs\nCyber Resilience Act and Product Liability Directive: an open plea for fairness by ISC and NLnet Labs (non-profit developers of open source internet infrastructure software)\nNew Legislative Framework (NLF) of the European Union\nThe New Legislative Framework normally imposes responsibilities on those who “make a product available” on the market. 
The Blue Guide clarifies that, “A product is made available on the market when supplied for distribution, consumption or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.”3 The Blue Guide goes on to tell us, “Commercial activity is understood as providing goods in a business related context.\u0026quot;4 We conclude that Europe, like the United States, wants to impose responsibility on those who are best placed to carry the burden - those who supply products in a business related context.\nNon-profit distribution of open source internet infrastructure software should not be viewed as a “commercial activity” in a “business related context”5 The Blue Guide recognises that determining the existence or nonexistence of a “commercial activity” can be complicated in the case of non-profit organisations.6 It gives a non-exhaustive list of factors to consider when determining whether a non-profit organisation is distributing a product “in the course of a commercial activity”: (i) regularity of supply; (ii) characteristics of the product; and (iii) intentions of the supplier. Going through these in turn, we note the following.\nRegularity of supply.\nFor multiple decades our organisations have been developing and giving away open source DNS software for free. Regularity of supply is the hallmark of a healthy open source project and is to be encouraged.\nCharacteristics of the product.\nOur software is not intended for consumer use. Our software is adopted, installed, configured, and managed overwhelmingly by sophisticated entities such as internet service providers and others who manage complex networks and incorporated by third parties into their commercial, for-profit, product and service offerings.\nIntentions of the supplier.\nOur organisations were established for the purpose of empowering others to grow and operate the internet. 
We build a series of tools - tools adopted and used by sophisticated technology experts - to maintain the Internet.\nThe text of the CRA appears to deviate from the Blue Guide, creating significant uncertainty about its scope of application.\nAlthough we set out our observations above using the multiple factors provided by the Blue Guide explaining why we believe our organisations and similarly situated entities should not be treated as supplying software as part of a “commercial activity” in a “business related context,” the text of the proposed laws under discussion seems to drag us away from that analysis.\nThe amendment to Recital 10 of the CRA recently proposed in the rapporteur’s draft report of March 31st would leave the text as follows:\n(10) In order not to hamper innovation or research, only free and open-source software supplied in the course of a commercial activity should be covered by this Regulation. In the context of software, a commercial activity might be characterized not only by charging a price for a product, but also by charging a price for technical support services… (emphasis added)\nWe appreciate that this language helps to emphasise existing Blue Guide commentary that a “commercial activity” can include circumstances where a product is given away for free.7 Unfortunately, this language also suggests (rather strongly) that this is the end of the analysis. It seems to say that “charging a price for technical support services” alone is determinative.\nThis is the point that concerns us.8\nDeveloping software that will be given away for free still requires funding.\nWriting secure and stable software requires professional, timely, and recurring engineering effort and oversight. History teaches that this cannot be accomplished long-term by the efforts of volunteers alone, no matter how talented. This, in turn, means that somebody must pay for the effort. By definition, open source software is not sold - it is given to the world for free. 
Similarly, maintenance is not sold - it is given away to the world for free. The challenge becomes how to secure financing without breaking the benefits of the open source model. Grant seeking activity alone is not sufficiently predictable. Those who provide grants are typically more interested in funding new features and less interested in funding routine maintenance activity that is critical to promote security and stability. Offering affiliated technical support or consultancy services to implementer/operators has become a popular model to secure recurring revenue to cover the cost of continued maintenance and development.\nOur Plea For Fairness\nPlease do not jeopardise this non-profit funding model that has allowed us to fulfil our charitable mission of distributing secure and stable open source internet infrastructure software these many decades.\nThe EU Cyber Resilience Act is new proposed legislation from the EU Commission that aims to improve the cybersecurity of products containing software, by requiring the application of a \u0026ldquo;CE Mark\u0026rdquo; for software. Compliance requirements vary based on whether the product is classified as \u0026ldquo;critical,\u0026rdquo; which is a good thing. The open source that ISC and NLnet Labs publish (including BIND, Unbound, NSD, BIRD, Kea DHCP) appears to fall into the most heavily regulated category under the act. There is an exemption for open source, but open source that has any associated commercial activity (meaning any significant funding stream to sustain it) is explicitly included in the regulation. (The definition of a product, and associated commercial activity, as well as the classification scheme, are areas that could use more clarification.) For an excellent overview of the proposed legislation, take a look at Maarten Aertsen\u0026rsquo;s blog from November 2022. 
This letter is one of many to the committee members, see also this letter signed by a number of open source organizations.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe rapporteurs and shadow rapporteurs are: Nicola Danti, Henna Virkkunen, Beatrice Covassi, Ignazio Corrao, Ezven Tosenovsky, and Marc Botenga. For more ideas on what you can do, see this blog post by NLnet Labs\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe ‘Blue Guide’ on the implementation of EU product rules 2022. 2022/C 247/01.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nBlue Guide at Section 2.2, emphasis added.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nBlue Guide at Section 2.2, emphasis added.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n“Non-profit organisations may be considered as carrying out commercial activities if they operate in such a context. This can only be appreciated on a case by case basis taking into account the regularity of the supplies, the characteristics of the product, the intentions of the supplier, etc. In principle, occasional supplies by charities or hobbyists should not be considered as taking place in a business related context.” Blue Guide at Section 2.2, emphasis added.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n“A product is made available on the market when supplied for distribution, consumption or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge. 
” Blue Guide at Section 2.2 (emphasis added).\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nWe have elaborated upon this concern and its effects in our submission to the Commission, specifically the (i) disincentive to professionalise development and curation; the (ii) incentive to move away from open source nonprofit model; and the (iii) risk of harming product diversity and reducing innovation.\n“The Cyber Resilience Act: Unintended harms to security and stability of Open Source Internet Infrastructure Software”, joint submission to the European Commission by NLNet Labs, CZ.NIC, Internet Systems Consortium, Inc., and The Network Device Education Foundation (January 23, 2023). Available at: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13410-Cyber-resilience-act-new-cybersecurity-rules-for-digital-products-and-ancillary-services/F3376542_en\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/blogs/2023-cyber-resilience-act-letter/","tags":["ISC"],"title":"A Plea for Fairness for Non-profit Developers of Open Source Software"},{"categories":null,"contents":"All test results from the DNS Root Explorer app are automatically uploaded to ISC for subsequent analysis by any interested party.\nAll uploaded test results are public and can be viewed here.\nLocation Data If permitted by the user, the results include the user\u0026rsquo;s approximate location as determined by Apple\u0026rsquo;s location services to the nearest 1km. 
Precise location is determined (if possible) to show your location on the Map view but is not submitted.\nInternet Connection Data Limited information about the user\u0026rsquo;s internet connection is gathered, including:\nthe user\u0026rsquo;s public IP address, anonymized* the name of the user\u0026rsquo;s network provider, and their Autonomous System Number (ASN) whether the tests were made over a cellular data connection, Wi-Fi or wired Ethernet * The user\u0026rsquo;s public IP address is rounded to the enclosing IPv4 /24 subnet or IPv6 /48 subnet where these subnet sizes are chosen to correspond with the smallest \u0026ldquo;globally routable subnet\u0026rdquo;.\nAnalytics Data The results also include:\nthe version number of the application the class of device used (Mac / iPad / iPhone) but not the specific model ","permalink":"/dns-root-explorer-application-privacy/","tags":null,"title":"DNS Root Explorer Application Privacy"},{"categories":["BIND"],"contents":"The first lines of code for BIND 9 were produced in 1998, which means that the original BIND 9 architecture is 25 years old! Happy birthday!\nIt\u0026rsquo;s been a long time, so let\u0026rsquo;s refresh our memory on the state of the world back then. In 1998, top-notch off-the-shelf server hardware was something akin to the Compaq ProLiant 5500 model: up to four Intel Xeon CPUs with clock speeds around 500 MHz, up to 4 GB of RAM, a network interface speed of 100 Mbps, and a price tag well above $10,000 USD.\nIn 2023, for roughly the same money, it is entirely possible to build servers with 256 CPU threads, all steadily ticking at 2 GHz, complemented with outrageous amounts (read: terabytes) of RAM, and connected to a network using 10 Gbps or even 100 Gbps interfaces.\nComponent 1998 2023 CPU threads 4 256 CPU clock (MHz) 500 2000 RAM size (GiB) 4 1024 Network interface (Mbps) 100 10000 The operating system landscape has changed a lot as well! 
In 1998, the operating system market share was way more fragmented. AIX, HP-UX, Linux, NetWare, OS/2, Solaris, Unix in many other flavors, and Windows NT all had significant market share, with no clear winner. And each of them had its own set of bugs which needed workarounds. Last but not least, software engineering practices and computer science have also progressed quite a bit!\nTo sum it up, it\u0026rsquo;s clear that any software architecture designed in 1998 needs a serious overhaul to use the hardware available in 2023 - and that\u0026rsquo;s what we at ISC have been doing since 2017.\nTo give you a better idea about the scale of the changes, let\u0026rsquo;s imagine that this is what BIND 9.11 looked like:\nHere\u0026rsquo;s what we wish it looked like today, but it\u0026rsquo;s not quite there:\nAs you can imagine, the architectural differences between reality and fantasy are quite large. We all know how BIND 10 ended up, so rebuilding the software from scratch is not an option. But rebuilding it piece by piece is certainly possible \u0026hellip;\nBIND 9.16 In BIND 9.16, we replaced half of the old networking stack with the libuv library. In effect, the code for talking to DNS clients was brand new, and the part for talking to other servers was the old code from 1998. Essentially, we had something like this:\nPlease note the smoke still coming out of the front; both engines are running at the same time. It works - but certainly is not optimal. Obviously, half-implemented architectural changes can have negative side effects, too.\nBIND 9.18 The next step was to finish transition of the networking engine to use the libuv library whenever possible. This is essentially what BIND 9.18 is:\nPlease note there is no chimney at the front anymore - all the networking is powered by the new libuv code. But it still looks like a steam engine! Why is that?\nBecause all the old 1998 plumbing is still mostly where it was. 
For example, cache and zone data are stored in our own Red-Black Tree database implementation, which requires obtaining and releasing a lock every time a data access is made. This worked okay in 1998 with four CPU cores, but it\u0026rsquo;s not going to perform very well with 256 parallel accesses \u0026hellip;\nBIND 9.20 For BIND 9.20, we hope to rework lots of internal plumbing and replace the venerable Red-Black Tree database used for cache and zone data with a brand new data structure, which will avoid the need to lock the whole thing every time we access it. With that change in place, BIND 9.20 will become something like this:\nIt\u0026rsquo;s still not a rocket, but much closer than it used to be!\nOutlook We will continue to modernize BIND 9 and chase the ideal, fast-moving target of contemporary engineering and computer science:\n","permalink":"/blogs/bind-architecture-25-years/","tags":["BIND"],"title":"25 years of BIND 9 architecture"},{"categories":["ISC","Open Source"],"contents":" Each quarter we like to profile one of our engineers, to acquaint our customers and users with some of the people they may interact with as they use our software.\nWe\u0026rsquo;ve already written profiles of some of our other engineers, but this time we thought it might be nice to go to the top of the BIND pyramid and introduce you to Ondřej Surý, ISC\u0026rsquo;s Director of DNS Engineering. He leads the team of software engineers working on BIND 9, in tandem with the BIND 9 quality assurance group.\nSince starting at ISC in September 2017, Ondřej and his engineering team have produced more than 200 incremental releases in all the branches of BIND 9. Under Ondřej\u0026rsquo;s leadership, the project has become much more transparent, moving to an open repository. The release process has also changed dramatically: we now do regular monthly development snapshots, whereas previously we sometimes spent more than a year working on a new branch before exposing it to users. 
Ondřej and his team have refactored huge parts of the BIND code, making named both smaller and faster, and introduced a slew of ISC-maintained packages for easier deployment. At the same time, he has modernized and expanded ISC\u0026rsquo;s BIND 9 QA efforts, adding regular fuzzing, performance testing, and numerous pre-commit quality checks. Ondřej enjoys knowing that the software is used around the world: \u0026ldquo;I am certain that my work is being used every day at multiple places, be it my DNS work, my Debian packaging work (of which the PHP packaging is the most prominent), or my IETF standards work.”\nHe believes strongly in the value of open source: \u0026ldquo;Working on open source and free software is a form of activism. You can\u0026rsquo;t be an open source hacker and be indifferent; after all, your work changes people\u0026rsquo;s lives for the better.\u0026rdquo; He adds, \u0026ldquo;I\u0026rsquo;ve been an open source advocate for more than half of my life. In my 25-year-long career, I\u0026rsquo;ve only briefly worked on proprietary products; it was only slightly more fulfilling than a gig at a brewery where I made money by putting empty bottles into boxes.\u0026rdquo; What Ondřej likes about open source is \u0026ldquo;the transparency and the obligation to make things better: with open source software you make a commitment to improving the software for everyone.”\nBefore joining ISC, Ondřej worked for CZ.NIC, the Czech domain registry. He founded CZ.NIC Labs, where Knot DNS and Knot Resolver came into existence under his supervision, leading to some friendly competition with BIND 9. 
He holds a bachelor\u0026rsquo;s degree in psychology and sociology, which has surely come in handy throughout his vast experience in system administration, system programming, software engineering, and engineering management.\nOndřej truly appreciates his colleagues here at ISC: \u0026ldquo;I\u0026rsquo;ve learned so much new stuff because of them.\u0026rdquo; He also values ISC\u0026rsquo;s corporate culture, saying \u0026ldquo;I like the freedom I have here to make an impact for users of ISC\u0026rsquo;s software. I am committed to software development transparency - including our mistakes. I am not afraid to change things, and I can make decisions and take responsibility for the outcomes.”\nWhen asked if there\u0026rsquo;s anything else people should know about him, Ondřej replies with a quote from Richard Dawkins: \u0026ldquo;Some people define atheism as a positive conviction that there are no gods and agnosticism as allowing for the possibility, however slight. In this sense I am agnostic, as any scientist would be. But only in the same way I am agnostic about leprechauns and fairies. Other people define agnosticism as the belief that the existence of gods is as probable as their nonexistence. In this sense I am certainly not agnostic. Not only are gods unnecessary for explaining anything, they are overwhelmingly improbable. I rather like the phrase of a friend who calls himself a \u0026rsquo;tooth fairy agnostic\u0026rsquo; — his belief in gods is as strong as his belief in the tooth fairy. So is mine. We live our lives on the assumption that there are no gods, fairies, hobgoblins, ghosts, zombies, poltergeists or any supernatural entities. 
Actually, it is not at all clear what supernatural could even mean, other than something which science does not (yet) understand.\u0026rdquo;\nAlthough he may seem imposing if you\u0026rsquo;ve ever met him in person, Ondřej enjoys being outnumbered by his wife and three daughters at their home in Czechia.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. As always, we welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-sury/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Ondřej Surý!"},{"categories":["Annual Reports","BIND","DNS"],"contents":"BIND 9 Development BIND 9.18 was released at the beginning of 2022. This new major branch completed the transition to the new networking code. For a full list of added, removed, and changed features, see the release notes for BIND 9.18, but below are a few highlights for your enjoyment.\nBIND 9.18 was declared to be an Extended Support Version at the end of 2022: we encourage our users to upgrade to the latest BIND 9.18 release.\nIn 2022, the BIND 9 team grew - Tony Finch (formerly of the University of Cambridge) started working for ISC.\nEncrypted Protocols BIND 9.18 added support for the new encrypted DNS protocols:\nDNS over TLS (RFC 7858) XFR over TLS (RFC 9103) DNS over HTTPS (RFC 8484) TLS certificate management infrastructure functionality and corresponding configuration options were added to BIND. The networking code for DNS over TCP and DNS over TLS has been unified in the latest development releases, significantly reducing the amount of networking code.\nWe have had relatively little user feedback on encrypted DNS in general, as usage and interest have been low. 
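To give a flavor of what these transports involve operationally (a minimal sketch; the certificate paths and the tls block name are placeholders, not a vetted production setup), a DNS-over-TLS listener in BIND 9.18 can be configured roughly like this:

```
tls local-tls {
    key-file "/etc/bind/tls/server.key";
    cert-file "/etc/bind/tls/server.crt";
};

options {
    listen-on port 853 tls local-tls { any; };
};
```

BIND also provides a built-in tls ephemeral profile, which uses a throwaway self-signed certificate and is handy for experimentation.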
This, coupled with the recent refactoring, means there is a somewhat increased risk of undiscovered bugs in this area.\nImproved Memory Management BIND 9.18 now natively supports the jemalloc memory allocator, which works better with BIND\u0026rsquo;s DNS workloads, bringing reduced memory fragmentation and better performance.\nFinishing the transition to the new networking code also halved the number of threads needed to run named. All these changes allowed us to reduce BIND 9.18\u0026rsquo;s memory usage to BIND 9.11 levels, while improving both authoritative and recursive performance. See our earlier blog post on this topic for details and data.\nImproved Performance BIND 9.16 used the new networking code on the \u0026ldquo;server side\u0026rdquo; while still using the older networking code for outgoing DNS queries. Completing the conversion to the new networking code in BIND 9.18 improved recursive performance, particularly in cold-cache scenarios.\nBIND\u0026rsquo;s DNS name decompression and compression code were rewritten to be significantly faster. The improved decompression code is in BIND 9.18 and the new compression code will be in BIND 9.20. The new compression code can reduce the CPU time for outgoing zone transfers by 30%. Alongside the name compression improvements, we also rewrote the way BIND does case-insensitive DNS name comparisons to make it more efficient while reducing code duplication.\nSecurity In 2022, BIND 9.16 had six CVEs and BIND 9.18 had ten CVEs. Details can be found in the BIND 9 Security Vulnerability Matrix in the ISC Knowledgebase. We changed our vulnerability scoring process slightly, dropping the temporal metric from our scoring guidelines, because we found it had not made the scores any more accurate or useful in the case of DNS.\nFuture Development The ISC team continues to work on refactoring the multithreading design. 
The ultimate goal is to keep as much data as possible local to the I/O event loop in each thread to reduce contention between the threads, and only share the data structures that have to be shared - like the zone database, cache, or address database (ADB). Everything else (networking) has strong affinity to the thread that initiated the request.\nMore major refactoring is happening \u0026ldquo;under the hood\u0026rdquo; in 2023. BIND\u0026rsquo;s underlying database for storing zone records and cache uses a modified version of a red-black tree. Tony Finch has been working to adapt a qp-trie (https://dotat.at/prog/qp/) optimized for storing DNS names in BIND, to replace the venerable red-black tree in upcoming releases. We expect it will reduce memory usage and reduce the need for multithreaded locking.\nWe plan to add PROXYv2 support in one of the upcoming development releases (currently for -S edition only). This should be of interest to operators employing sophisticated forwarding architectures. Some of the RFC 8914 Extended Errors were added in 2022, with a few more still to come.\nBIND 9 Quality Assurance - Michał Kępień ISC\u0026rsquo;s BIND 9 QA team grew by one full-time engineer and one part-time intern in 2023, for a total of four full-time staff and our intern.\nWe have revised our security incident handling and code-signing processes to bring outdated procedures up-to-date and simplify and automate operations where possible. See this blog post for the code-signing update. ISC released 34 BIND open source releases (plus the associated private -S versions) in 2022. Each of these had a corresponding Docker, Debian, Ubuntu, and Fedora version, making a total of 170 packages published. A few of the older versions also had MS Windows binaries as well. We are continuing to automate as many release-related tasks as possible because of the high volume of releases. 
The BIND 9 QA team triaged and investigated many bug reports, including a few that led to very long threads \u0026hellip; see this blog post on a jemalloc issue! We performed the usual tedious work on bringing the OS versions used in our CI up to date (this was often non-trivial!). This also included moving away from CentOS due to that project\u0026rsquo;s fate. We automated configuration grammar processing as part of a significant update to the BIND ARM, which added hyperlinks to every command and significantly overhauled coverage of DNSSEC. The idea for the hyperlinking, and most of the new introductory text and images, were contributed by Ron Aitchison, author of the popular reference work \u0026ldquo;DNS and BIND for Rocket Scientists.\u0026rdquo; The QA team prepared images for testing BIND 9 on FIPS-ready platforms (this is still a work-in-progress as far as actual testing goes - but we needed to lay some groundwork for that). We finished (for the time being) work on improving the resolver performance benchmarking environment (more reliable provisioning, more readable charts/results and the codebase is better organized and easier to maintain). The QA team added new checks for automatic detection of leaking memory that is not tracked by named\u0026rsquo;s internal memory contexts (https://gitlab.isc.org/isc-projects/bind9/-/issues/3054). We made improvements to ISC\u0026rsquo;s Perflab test tool (https://github.com/isc-projects/perflab/pulls?q=is%3Apr+author%3Apspacek). Petr Spaček gave a talk at the July 2022 DNS-OARC workshop on the Performance Effects of DNSSEC Validation. The upshot was that \u0026ldquo;DNSSEC validation has a negligible impact on server latency, bandwidth usage, CPU time consumption, and the number of OS sockets. 
The only area of our tests in which we saw a significant impact from DNSSEC validation was memory consumption, which increased by about 10% with validation.\u0026rdquo; Contributions to Community Projects In addition to their work for ISC, BIND 9 team members contributed to a number of community projects.\nOndřej Surý:\nOpenSSL 3.x: helped set up the PKCS#11 Provider: https://github.com/latchset/pkcs11-provider/graphs/contributors libuv: fixed libuv recvmmsg() with MUSL libc: https://github.com/libuv/libuv/pull/3419 Michał Kępień:\nWireshark: implemented displaying NSEC3 Next Hashed Owner Name field in Base32-encoded form: https://gitlab.com/wireshark/wireshark/-/merge_requests/620 Linux kernel: implemented a new ioctl for reading the contents of flash memory devices, to address the deficiencies in the previously available ioctls: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=095bb6e44eb17da2cf95dbde9c83b44664a493f5 Petr Špaček:\ndnsperf: implemented the ability to send arbitrary queries, fast: https://github.com/DNS-OARC/dnsperf/pull/201; implemented the option to NOT log various annoying high-frequency messages like \u0026ldquo;unexpected message ID\u0026rdquo;: https://github.com/DNS-OARC/dnsperf/pull/196 python-gitlab: improvements for retry logic - retries when a GitLab resource is locked: https://github.com/python-gitlab/python-gitlab/pull/2326 Tom Krizek:\nDNS Shotgun: made various improvements in charts and PCAP processing: https://gitlab.nic.cz/knot/shotgun/-/merge_requests?scope=all\u0026amp;state=merged\u0026amp;author_username=isc-tkrizek respdiff-rs: implemented a CLI interface to allow for future extensions: https://gitlab.nic.cz/knot/respdiff-rs/-/merge_requests/8 Michal Nowak:\nFixed software collections for Fedora 35+: https://github.com/sclorg/scl-utils/pull/43 Reported a broken libuv 1.43.0 tarball: https://github.com/libuv/libuv/issues/3401 Worked with the Mock community to implement Oracle Linux 9 support: 
https://github.com/rpm-software-management/mock/pull/963 Worked with the packager of experimental GCC RPMs to fix a packaging issue: https://github.com/jwakely/pkg-gcc-latest/issues/8 Artem Boldariev:\nCommented on the design for QUIC support in OpenSSL (https://github.com/openssl/openssl/pull/17185) Updated his MSI installer for GNU Emacs on Windows (https://github.com/arbv/emacs-msi-installer) 2022 Board or Committee Involvement Petr Špaček: Became a DNS-OARC Program Committee member. Continues sitting on IETF\u0026rsquo;s ICANN Technical Liaison Group (although the group is dormant and had literally zero requests in the past year) Made minor contributions to the IETF\u0026rsquo;s standardization of catalog zones Performed substantial work on RFC 9276: https://datatracker.ietf.org/doc/html/rfc9276 ","permalink":"/blogs/2022-bind9-retrospective/","tags":["BIND"],"title":"BIND 9 - 2022 Retrospective"},{"categories":["ISC","F-Root"],"contents":"We’ve had another good year with F-Root, with twelve new sites coming online, and two Classic sites getting upgraded to F-single configuration (see below for an explanation of the difference). Two further sites were decommissioned. We had hoped to upgrade more of the Classic sites, but negotiations with the site sponsors are in progress at most of those.\nOur provisioning systems are now pretty mature and we’ve deployed an internal Prometheus data collection system with a Grafana front end to get a better view of our DNS traffic, both for short term spikes and longer term trends.\nAs of the end of 2022, F-Root consists of:\n3 global sites (2021: 3) 73 “F-single” sites (2021: 58) 5 “Classic” sites (2021: 8) 259 additional nodes hosted by Cloudflare (2021: 219) A global site hosts two F-Root servers as well as F-Root management infrastructure, and advertises covering prefixes for our Anycast addresses into the global routing table. 
An F-single is our current configuration, and relies on a single 1U Dell server to provide both the F-Root DNS service and BGP Anycast routing. A Classic site comprises 2x F-Root servers, a serial console server, routers, and switches. The Classic sites are being gradually phased out and where possible replaced with an F-single node. Both F-single and Classic configurations use the BGP “NO_EXPORT” community to limit their geographical catchment areas.\nSite Changes Completed in 2022 New Sites\nCode Location Sponsor BER1 Berlin, DE BCIX CLT1 Charlotte, NC, US Ninja-IX DFW1 Dallas, TX, US Ninja-IX DLA1 Douala, CM ST Digital DPS1 Denpasar, ID Cloud Exchange FOR1 Fortaleza, BR NIC.BR LAS1 Las Vegas, NV, US Ninja-IX LUX1 Luxembourg, LU LU-CIX MGQ1 Mogadishu, SO Dalkom Somalia NNG1 Nanning, CN CNNIC PHX1 Phoenix, AZ, US Ninja-IX SMF1 Sacramento, CA, US Ninja-IX Upgraded Sites\nCode Location Sponsor MVD1 Montevideo, UY Antel SEL1 Seoul, KR KISA The full lists of sites run by ISC (as well as the other 11 Root Server Operators) can be seen at https://www.root-servers.org/\nRay Bellis, Director of DNS Operations ","permalink":"/blogs/2023-f-root-update/","tags":["F-Root"],"title":"The State of F-Root"},{"categories":["ISC","DNS","Training"],"contents":"ISC\u0026rsquo;s Eddy Winstead will be giving a one-day DNS Fundamentals course at the upcoming NANOG meeting in Atlanta.\n\u0026ldquo;Learn about the DNS via discussion and real-world, real-time, hands-on investigation. The DNS is a critical part of Internet functionality. It is quite an old protocol, with many, many modifications made over the years. We\u0026rsquo;ll talk about the original designs and current implementations while seeing all this in action using freely available tools, mostly Dig.\nThis class is free. Registration is required to attend.\u0026rdquo;\nISC will be at NANOG 87 in Atlanta in force. 
Find us at our sponsor table at Monday Lunch, or, to make an appointment, email us before NANOG at info@isc.org!\n","permalink":"/blogs/nanog87-dns-fundamentals/","tags":["Kea","training"],"title":"NANOG 87 - DNS Fundamentals"},{"categories":["BIND","DHCP"],"contents":"With the January 2023 maintenance releases, ISC is now encouraging all our users to consider updating to the 9.18 branch, if they are running an earlier branch. We are also announcing availability of the first 9.18-S Edition, for qualified ISC professional support subscribers. We feel that the 9.18 branch is as stable as, and in some ways superior to, the existing 9.16 branch.\nWhat does Extended Support Version (ESV) mean? The ESV designation means a BIND 9 branch is stable enough for long-term adoption. BIND 9.18 has reached that milestone, and we encourage users running older versions to update to 9.18 now. In the past, ISC did not designate every version as ESV, but with our current release model, every even-numbered branch will be designated as ESV approximately midway through its active development cycle. We deem a branch ready for the ESV designation when there is a consensus between our development and technical support teams that the release is widely deployed and stable enough for us to encourage adoption by our more conservative users.\nWhen we move a version into ESV status, we stop adding new features to it. We may still modify existing features, generally in response to bugs, but we try not to add new features. We might even defer bug fixes if we think there is a chance the fix might cause problems for some users.\nHow long will BIND 9.18 be supported? One reason to migrate to BIND 9.18 now is that it has a longer support period ahead. 
BIND 9.16 will be supported through the end of 2023 while BIND 9.18 support will extend two years beyond that, through the end of 2025.\nThrough the end of 2024, the 9.18 branch will receive regular maintenance releases with fixes for serious issues and security vulnerabilities; no new features will be introduced. Minor feature changes may happen, especially in the course of mitigating a defect, as long as they don\u0026rsquo;t introduce backward compatibility conflicts. The last year of maintenance will be limited to critical and security issues only, and there will be no regular maintenance releases.\nHow does BIND 9.18 compare to BIND 9.16 and 9.11? The BIND 9.18 branch is based on BIND 9.16, plus two years of additional development work. This development work included some very significant refactoring, which was already underway when we released BIND 9.16. Below are some of the major ways in which refactoring has updated BIND 9.18.\nBIND 9.18 completes the network refactoring work begun in 9.16 During 2020 and 2021 we refactored BIND’s networking interfaces to use the widely used Linux libuv library in place of the older BIND-specific networking code. BIND 9.16 uses the new libuv code to receive and process incoming DNS messages (from clients), but it still uses the older networking code for sending and processing outgoing DNS messages (to other servers). We completed the refactoring to use the new libuv code for both client and server communications in BIND 9.18.\nThe hybrid networking infrastructure caused some unexpected anomalies in 9.16. We have of course fixed those we identified, but there may be other subtle residual effects that are hard to pinpoint. Because the networking infrastructure is consistent in BIND 9.18, it is more stable. 
This was a substantial change in BIND, but one that should significantly improve long-term sustainability.\nMultiple listener model In BIND 9.11 and earlier versions there was a single “listener” that distributed the incoming work between idle threads. In BIND 9.16 and later there are multiple listeners, directing incoming traffic to threads. There are benefits to each model.\nTo explain this, it might be helpful to envision the passport control line at an airport: generally, everyone is either all in a single long line, watching and listening to hear which passport examiner is free; or there is an airport staff person directing people to line up in front of different passport examiner booths. In the former case, there is some delay between the time a passport examiner finishes with one customer and when the next customer steps up to the window. It is a little inefficient, because new work is never queued up until an examiner is free. In the latter case, the examiner is working continuously, but there is always a risk that you might end up stuck in a line that isn\u0026rsquo;t moving because your examiner took a break, or is dealing with someone who doesn\u0026rsquo;t have a required visa.\nThe older, single-line model was less efficient than the current multiple-listener model, but it also consumed less memory. The multiple-listener model is more efficient, reducing latency. However, there is a risk with the multiple-listener model that some work could get stuck on a thread that is overloaded. The multiple-listener model relies on the Linux kernel to queue up work for threads. It is now possible to disable the multiple listener and revert back to the old behavior, in rare cases where you sometimes have very busy threads (e.g. doing high-volume transfers). 
We still think the new model is better, of course, but it is a change to be aware of.\nBIND 9.18 uses jemalloc for memory management Another major change implemented in BIND 9.18 was the replacement of the legacy BIND memory allocator with the jemalloc memory allocator. The older memory allocator kept pools of memory for later reuse and would never free up the reserved memory. The jemalloc memory allocator is much better suited to the memory usage patterns that BIND 9 exhibits and is able to be both fast and memory-efficient.\nBIND 9.18 consumes less memory than 9.16 for the same application As a consequence of the network refactoring and the switch to jemalloc, BIND 9.18 uses significantly less memory than BIND 9.16, and is therefore preferable for that reason. See this recent blog on BIND Memory Management for more details.\nThe BIND Administrative Reference Manual (ARM) has been reorganized and some new material has been added, including diagrams; statements have been grouped into related categories with tags. This is part of an ongoing effort to provide more useful configuration guidance in the ARM, beyond command syntax. What are the feature differences between 9.18 and 9.16? In addition to refactoring, we also added a few new features in BIND 9.18, removed support for some older features, and updated our list of supported operating systems.\nEncrypted transport, with DoH and DoT BIND 9.18 is the first stable branch with support for two new encrypted transports, DNS over HTTPS (DoH) and DNS over TLS (DoT). We have included the ability to limit connections over either transport, to avoid overwhelming a mixed-use system with more cpu-intensive encrypted connections. The popular dig tool can send both DoT and DoH queries, although we do not yet support forwarding over either DoT or DoH.\nNeither of these new transports has become especially popular among BIND users as far as we can tell, but we do know some users are deploying zone transfers over TLS (XoT). 
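To illustrate the configuration surface (this fragment is not from the original post; the file paths and the TLS block name are placeholders), a minimal named.conf sketch enabling DoT and DoH listeners in BIND 9.18 looks roughly like this:

```
// Placeholder TLS key/certificate definition
tls local-tls {
    key-file "/etc/bind/tls/server.key";
    cert-file "/etc/bind/tls/server.crt";
};

options {
    listen-on { any; };                                      // plain DNS on port 53
    listen-on port 853 tls local-tls { any; };               // DNS over TLS (DoT)
    listen-on port 443 tls local-tls http default { any; };  // DNS over HTTPS (DoH)
};
```

The dig that ships with BIND 9.18 can then exercise either transport, e.g. `dig +tls @127.0.0.1 example.com` or `dig +https @127.0.0.1 example.com`.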
Although we are proud of our implementations of these new transports, they should be considered experimental until there is more evidence of large, successful deployments. There are more details in the release notes and in the following blogs.\nSeptember 2021 March 2021 February 2021 Support for Catalog Zones schema v.2 (IETF standards-track) Catalog zones are meta-zones that contain the list of zones to be served. ISC originally released an experimental, proprietary version of this feature, and since then we have worked with others in the IETF to propose a standardized implementation. We have updated our implementation in BIND 9.18 to support this new v.2 schema. This is another feature that has not yet been widely deployed at scale, so we appreciate feedback from those who are using it.\nKnowledgebase article ARM Extended Errors IETF RFC 8914 established 25 new error codes to provide more details with negative responses (typically with SERVFAIL responses). We have been implementing these incrementally in named. BIND 9.18 includes support for the first three we have implemented: #3 (Stale Answer), #18 (Prohibited) and #19 (Stale NXDOMAIN Answer). Additional extended errors are being added in the current development branch.\nARM Updates to the Key and Signing Policy (KASP) utility This is the recommended, fully automated way to sign and maintain DNS zones. For most use cases users can simply use the built-in default policy, which applies up-to-date DNSSEC practices.\nAggressive Use of DNSSEC-Validated Cache (synth-from-dnssec, see RFC 8198) is now enabled by default again, after having been disabled in BIND 9.14.8 What features have been removed? Windows(tm) support has been ended named and all utilities, including dig, are no longer available for Windows(tm).\nAs announced in April 2021 on the bind-users mailing list and in our blog, support for BIND 9 on Windows has been deprecated. 
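As a sketch of the catalog zones support described above (the zone name, file name, and primary address are placeholders, not taken from the original post), a secondary server consuming a catalog zone in BIND 9.18 is configured roughly like this:

```
options {
    catalog-zones {
        zone "catalog.example" default-primaries { 192.0.2.1; };
    };
};

// The catalog zone itself is transferred like any other secondary zone;
// the member zones listed inside it are then provisioned automatically.
zone "catalog.example" {
    type secondary;
    primaries { 192.0.2.1; };
    file "catalog.example.db";
};
```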
There is a list of the supported platforms in the BIND ARM and more detail in our Knowledgebase. There are a few other minor deprecations. Support for the map zone file format (masterfile-format map;) has been removed. Native PKCS#11 support has been removed and replaced with support for OpenSSL integration via the OpenSSL APIs. DSCP packet marking was non-operational and is being removed in a future version. Explicit definition of port numbers is also being deprecated from BIND; this has been available in all the operating systems we run on for more than a decade already. We track features we plan to obsolete on the BIND Significant Features matrix.\nThere are, of course, many more changes that may affect certain users and environments. We always recommend reviewing the cumulative release notes, or the Changes log that refers to the releases between the one you are migrating from and the version you are moving to. Our support team has compiled a list of changes to BIND statements that may help you identify changes you should make to update your configuration.\nReferences BIND release notes Changes to be aware of when moving from BIND 9.16 to 9.18 BIND Significant Features matrix ","permalink":"/blogs/2023-BIND-9.18-extended-support/","tags":["BIND","Open Source"],"title":"BIND 9.18 Branch Enters Extended Support"},{"categories":["Open Source","Company News","Compliance"],"contents":"Updates Since the open source community started mobilizing to respond to the Cyber Resilience Act, there have been a number of excellent blogs on the topic. 
(These are listed with the most recent towards the top.)\nBert Hubert has published two blogs now, \u0026lsquo;The EU\u0026rsquo;s new Cyber Resilience Act is about to tell us how to code\u0026rsquo; and \u0026lsquo;EU Cyber Resilience Act part two: Updates \u0026amp; Impracticalities\u0026rsquo;.\nThe Free Software Foundation Europe has proposed that the EU shift the responsibility and cost of compliance to those deploying open source, away from those developing it. fsfe comment\nMike Milinkovich of the Eclipse Foundation focuses on the possible unintended consequences of extending the updated EU Product Liability Directive to open source software.\nThe Fossified Podcast had an excellent discussion on the topic of the CRA.\nSimon Phipps has compiled a summary of the public comments on the CRA regarding open source.\nThe OpenForum Europe has convened a working group to provide continuing feedback on the CRA, after submitting an initial public comment here.\nThere was a panel at the February 2023 FOSDEM in Brussels on how regulating software for the European market could impact FOSS. See the recording for the talks, and the Q\u0026amp;A, although the discussion was not particularly hard-hitting.\nFor an excellent overview of the proposed legislation, take a look at Maarten Aertsen\u0026rsquo;s blog from November 2022.\nWhat is the EU Cyber Resilience Act (CRA)? The EU Cyber Resilience Act is new proposed legislation from the EU Commission that aims to improve the cybersecurity of products containing software, by requiring the application of a \u0026ldquo;CE Mark\u0026rdquo; for software. Compliance requirements vary based on whether the product is classified as \u0026ldquo;critical,\u0026rdquo; which is a good thing, but as far as I can tell, the open source that ISC publishes appears to fall into the most heavily regulated category under the act. 
There is an exemption for open source, but open source that has any associated commercial activity (meaning any significant funding stream to sustain it) is explicitly included in the regulation. (The definition of a product, and associated commercial activity, as well as the classification scheme, are areas that could use more clarification.)\nFor an excellent overview of the proposed legislation, take a look at Maarten Aertsen\u0026rsquo;s blog from November 2022.\nWe have some comments on the CRA ISC, along with three other non-profit developers of open source infrastructure software\u0026ndash;NLnet Labs, The Network Device Education Foundation, and CZ.NIC\u0026ndash;have submitted a joint comment on the impending EU CRA, calling out some of our specific concerns about the legislation as it applies to Internet infrastructure. We invite you to read our joint response.\nMost of the objective requirements are reasonable best practices To be clear, we are very committed to working towards high standards of cyber security in our software. At ISC, we are actually already meeting most of the specific requirements of the CRA, as far as vulnerability reporting, patching, public notifications, etc. We have well-established and -documented policies and procedures for vulnerability reporting and, unfortunately, we have a lot of experience in this area. Some of the CRA requirements are fairly reasonable, and in line with generally accepted best practices.\nHowever, other requirements pose a problem for us, and others are completely unworkable for any open source project. For example, the CRA as currently drafted appears to prohibit our established practice of notifying the DNS root system operators, as well as some of our technical support subscribers and operating system packagers of our software, of security vulnerabilities before we make the vulnerability public. 
This advance notification is both a good thing for the security of the Internet, and helpful for ISC as a business practice, because it is a benefit for our supporters. Other requirements of the CRA, such as the prohibition against releasing any software with any known vulnerability, regardless of the timing of the report, or the severity of the vulnerability, are just impractical for a large project with frequent releases and many issues. The regulation refers to \u0026lsquo;putting a product on the EU market\u0026rsquo; which is hard to interpret in the context of an open source project with an open repository where the software is always available to anyone on the Internet. There are many places where the CRA as written is unclear, and critical terms are inadequately defined.\nHowever, even if we clarify or modify these requirements, compliance with this new regulation is not going to help ISC publish more secure software. My main gripe about the CRA is that it is adding burdens to open source publishers in the name of increasing cybersecurity, when the main cause of poor open source software security is lack of resources. Once the CRA takes effect, from my reading, ISC will have to pay third-party auditors to audit our development, documentation, testing, and release processes, at indeterminate intervals. Not only do we have to pay for this scrutiny, it may also delay releases until the auditors are satisfied. If this were an optional or consultative engagement, that might possibly be helpful, but as a mandatory requirement, it is definitely going to impose additional overhead while adding little value.\nThe CRA may be a necessary and effective step towards improving the cybersecurity aspects of commercial products, particularly the large numbers of unmaintained Internet of Things (IoT) devices that can form dangerous botnets and are the Achilles heel of the tech industry. 
However, the CRA has explicitly included developers of free open source in the scope of this regulation. While I agree that open source developers have an ethical responsibility for the quality of their output, this regulatory burden is misplaced and unhelpful. I would like to propose some alternative suggestions for how the EU could more effectively support production and implementation of secure open source.\nWhat problems is the CRA addressing? The CRA lays out two main categories of cybersecurity issues, or risks, that it intends to address:\n\u0026ldquo;(1) a low level of cybersecurity, reflected by widespread vulnerabilities and the insufficient and inconsistent provision of security updates to address them, and (2) an insufficient understanding and access to information by users, preventing them from choosing products with adequate cybersecurity properties or using them in a secure manner.\u0026rdquo;\nThe Act goes on to propose remedies for these two problems. But what are these solutions?\n1) Improvements to software quality The remedy for the first problem, the presence of unmitigated known vulnerabilities in released software, consists mainly of a prohibition on putting any product on the market with known vulnerabilities. (Annex I, Section 1(2) prohibits the delivery of software with “known exploitable vulnerabilities.”) It is hard for me to read this without feeling insulted. Do these people not realize we are already working AS HARD AS WE CAN to prevent shipping insecure software? It appears they are trying to replace our expertise in triaging vulnerabilities and balancing the risks of security bugs with a blanket mandate. We do, in fact, try not to publish versions with known vulnerabilities, of course.\nThere are cases when we might, however, ship a version with a known or suspected vulnerability unpatched. Most often, this is because the vulnerability has just been reported and not yet been verified. 
Sometimes, the patch is not yet ready, and there are other things in the version that are urgently needed by our users. We have many vulnerabilities reported to us by bounty hunters that we judge to be very low severity, and we may prioritize fixing other more severe issues first. We have a \u0026lsquo;blackout\u0026rsquo; period over the end of the year holidays when we try not to publish vulnerabilities, because many of our users cannot update then. In some cases, more systemic vulnerabilities require coordination with other developers in the DNS community and we cannot just rush out our patch if others are not ready. There are nuances to consider that the CRA does not account for.\nNot shipping software with known vulnerabilities is not nearly as hard if you don\u0026rsquo;t also actively look for vulnerabilities. To make a halfway-decent stab at this, you need at least:\na quality testing process that includes fuzzing, static analysis, system testing, and unit tests to find bugs before release. (of course if you don\u0026rsquo;t look for vulnerabilities, then there are none to disclose\u0026hellip;) a well-advertised mechanism for external researchers and users to report security bugs. (Having a broad and sophisticated user base deploying and observing the software in multiple different environments and applications is extremely helpful for vulnerability discovery.) resources for triaging and reproducing reported security issues promptly and thoroughly. adequate resources and knowledge to produce, verify, and release a fix or mitigation for the vulnerability. a design and code-review process that screens new code for potential issues. ongoing creation of new regression tests to prevent recurrence of old defects. All of these critical functions require highly skilled, committed, and available engineering staff, as well as the supporting infrastructure and tooling they need to be productive. 
The CRA presumes that the only barrier to shipping secure software is a lack of organizational will, which it attempts to supply via regulation. This might be an issue with some commercial products, but a lack of will on the part of its maintainers is hardly the biggest obstacle to improving the security of open source in general. Most open source projects are not profit-generating, so software security risks are not traded off versus revenue goals. Gaps in security processes in open source are much more likely to be due to practical issues, like resource constraints or lack of knowledge, than any lack of motivation.\nOne big difference between open source and closed source is that there are a lot more security researchers looking for vulnerabilities in the open source. The CRA deals only with known vulnerabilities, but there are a lot more unknown, undiscovered vulnerabilities in the less-examined closed source.\nCompliance regulation is the wrong approach The CRA does nothing to help any organization struggling with the real task of producing secure open source. Producing quality open source software requires resources, and nothing in the CRA helps to make those resources available to non-profit open source publishers. In fact, compliance with the Act will only add burdens on open source developers that will effectively reduce the resources available for improving quality and software security.\nThe Act makes a point of calling out open source software as a different category, exempting open source unless it is distributed in association with some commercial activity, and explicitly including technical support as a commercial activity. 
However, the authors did not consider the specific challenges and opportunities in developing and releasing secure open source, which could be leveraged to improve cyber security.\nI am as wary as the next person of the behemoths in the tech world, but the fact is, some of the most useful programs for actually supporting open source come from them. Google has their OSS Fuzzing project, and Amazon and Fastly have programs to provide free or discounted services for open source. Github has continuous vulnerability scanning and notification for software on their platform. Many open source projects use the free static analysis provided for open source projects for years by Coverity, now Synopsys. Are these crumbs from the tables of the giants? Maybe so, but the point is, they are highly leveraged and objectively useful in improving the quality and security of open source software. ISC\u0026rsquo;s development infrastructure is almost entirely based on open source, including the Mattermost chat application and the GitLab version control system, both of which are sustained by their commercial operations. These are all helpful resources that, in one way or another, help us to produce more usable, better-quality, and more secure open source. The CRA as a compliance-focused regulation would be more effective in addressing cyber resilience if it were accompanied by some programs like these, to provide practical assistance to open source projects, most of which are struggling to sustain themselves.\n2) Access to information The second major problem identified by the CRA is education, an \u0026ldquo;insufficient understanding and access to information by users.\u0026rdquo; The CRA includes requirements for documentation of secure deployment practices and secure default configurations, which are perfectly good recommendations. The CRA also includes some confusing language about mandated exploit and vulnerability reporting, which will probably be clarified in the future. 
The CRA does mention that developers should track and report on the software components included in products, which is undeniably important for operators to be able to tell what vulnerabilities they may be exposed to (though the industry doesn\u0026rsquo;t yet have a fully specified standard for how to do this). However, when it comes to open source, considering we are only discussing known vulnerabilities in the CRA, it hardly seems likely that the biggest obstacle for our users is lack of knowledge about the existence of these vulnerabilities.\nThe proposed regulation does not address the more serious problem of slow application of available security fixes\u0026ndash;and lax software maintenance in general\u0026ndash;in production deployments. Researchers have reported that even some very well-publicized vulnerabilities, such as the Log4j vulnerability, or even Heartbleed, went unpatched for far too long in production deployments.\nHere again, the reality is that keeping up with the flood of vulnerability reports and resulting patches is simply a lot of hard work, and many organizations don\u0026rsquo;t have the resources or the skills to do it. Without a doubt, this is one of the main drivers of the popular adoption of cloud-based services, which shift the system maintenance burden onto an external vendor. The CRA is not wrong to require that product vendors provide decent product documentation and notify users of known severe defects, but this alone is insufficient to solve the problem of the many unpatched systems out on the Internet. 
In fact, to the extent that the CRA regulations may discourage the production of pre-compiled binaries for their users (because those would be additional products subject to compliance regulation), it may make it harder for users to keep their systems updated.\nThe EU could attempt to lower the burden on systems maintainers by, for example, funding an automated, machine-readable, subscription-based service for vulnerability notifications and patches. Collecting information from EU organizations on what open source they have deployed would provide some valuable data on where the biggest exposure might be: because open source software is freely distributed and downloaded, nobody has good data on how widely deployed open source packages are.\nThe EU could maintain a repository of links to \u0026lsquo;approved\u0026rsquo; open source versions. It could sponsor research and production of easy-to-apply guidance on systems security. It might even be useful to maintain a central directory of open source publishers to help identify downstream suppliers. The CRA does include a requirement to provide Software Bills of Materials (SBOMs). SBOMs may eventually help implementers discover vulnerable products in their networks, but the standards and technology aren\u0026rsquo;t quite mature enough yet for mandatory compliance requirements to be helpful. The industry could use some focus on nailing down a fully specified standard as soon as possible, so this information can be made available universally in a machine-readable format; some free open source tools to help implementers identify vulnerable products in their network would also be useful.\nIf the approach has to involve compliance and regulation, the EU could consider imposing some requirements on the organizations who deploy open source or who reuse it in commercial products, instead of adding to the burden on open source producers. 
These could include requirements for implementers to support the open source they use, to patch known severe vulnerabilities, or to run only actively maintained versions. Perhaps these requirements could be tied to access to cybersecurity insurance or some best practices rating system, to help nudge management into allocating resources towards system maintenance.\nSummary Nothing in this regulation will improve the cybersecurity of open source. Some provisions will waste scarce resources, and in a few cases, the regulations may conflict with our established best practices. However, the bigger question is, what could the European Union do that would help improve cyber-resilience in general, and open source software security in particular?\nSponsor or provide highly leveraged services or open source tools and infrastructure for assuring software quality. These could include security audits; fuzzing, scanning, and analysis tools; or hosting or computing resources.\nProvide funding for ongoing maintenance of important open source software, or encourage and incentivize the implementers and users of open source to contribute appropriately to the common goods they are benefiting from. There is funding in the EU for open source projects, but of all the open source development grant offerings I have seen, exactly zero aim to fund maintenance or quality operations for established projects.\nProvide guidance for businesses and consumers about cyber-resilience best practices, based on peer-reviewed research in the field. ISC\u0026rsquo;s BIND 9 users can, for example, consult the NIST Secure Domain Name System (DNS) Deployment Guide and associated Security Technical Implementation Guide (STIG) checklists. 
Per the CRA, we at ISC would have to provide this advice ourselves, and to be honest, we are probably not the experts in this area.\nProvide an optional self-certification process that characterizes the software vendors\u0026rsquo; practices with regard to security vulnerabilities in a way that is both transparent and easy for users to consume. An example of such a program is the Linux Foundation\u0026rsquo;s OpenSSF Best Practices Badge. One way to incentivize implementers to select software that follows best practices with regard to cybersecurity might be with underwriting for cyber insurance that provides cost incentives for best practices that reduce risk.\nResearch how best to assist implementers in reducing the burden of ongoing system maintenance, and invest in those solutions. These will likely include some sort of system for monitoring standardized SBOMs and a subscription-based vulnerability notification system. A vulnerability notification system that permitted or even required enterprises to report the open source they are using, in exchange for prompt vulnerability notifications, would also provide the EU with a census of open source usage in the community.\nConsult with the open source community in developing a plan to regulate it. Perhaps this should have been my first suggestion?\nWhat happens next? The regulation will take effect in 24 months. Between now and then, there is an extensive process of developing, in each EU member country, the local regulations to implement the Act, including any \u0026ldquo;harmonized\u0026rdquo; standards that might apply. The Commission will also have to solicit and train an army of auditors to run the compliance process. There should be opportunities to engage with the regulatory authorities in each EU country over the coming year, and we hope open source users as well as developers participate in this process. 
The EU Commission working on this regulation has been engaging with the public in multiple fora to explain the regulation and has been open to comments on the impact of the regulation. We are pleased to see that the upcoming FOSDEM conference will include a panel discussion on the EU Cyber Resilience Act that will include several of the Act\u0026rsquo;s authors, as well as a talk on the related Product Liability Directive.\n","permalink":"/blogs/2023-cyber-resilience-act-comment/","tags":["ISC"],"title":"ISC and the EU Cyber Resilience Act"},{"categories":["ISC","Kea","Webinars","Training"],"contents":"ISC is delighted to announce its spring 2023 webinar series, focused on Kea DHCP. This series of five webinars will tackle some current topics of interest for DHCP administrators. Once again, we have Carsten Strotmann from sys4 as our trainer and presenter.\nRegister once for the entire series: all sessions will be recorded and the recordings posted here within a few days, so you can easily review any sessions you miss. Each session will last approximately one hour, including Q\u0026amp;A as time permits. Most of these sessions will include an optional hands-on component, using cloud-hosted VMs. Participants in the hands-on portion should allow an extra half-hour to complete the exercise. The hands-on lab is only available during the live session.\nSchedule 23 Feb - Using Stork to manage Kea 30 Mar - Configuring vendor options in Kea 20 Apr - Netbox and Kea DHCP 16 May - Migrating to Kea from ISC DHCP 07 Jun - Using the new dynamic templates in Kea Please note the first three sessions are on Thursdays, but the last two sessions are on other weekdays. There is also a Daylight Savings Time disconnect between the US and Europe; please check your registration from Zoom for the correct time in your timezone.\nPlease register at: https://us02web.zoom.us/webinar/register/WN_3ThmsX6WRJKU7PFr01ybwg. 
And please subscribe to our training-info mailing list to be notified of other upcoming ISC webinars!\nWe look forward to seeing you online.\n","permalink":"/blogs/kea-webinar-series-2023/","tags":["Kea","webinars","training"],"title":"Webinar Series - Using the Kea DHCP Server"},{"categories":["BIND","DHCP"],"contents":"The Basics The basic BIND 9 memory management object is a memory context: the application can have as many as is practical. There are two reasons for a separate memory context: a) logical separation - this includes both separate accounting and different configuration; and b) contention and speed - access to a memory context pinned on a specific thread will not be blocked by different threads.\nLimiting memory use The configuration option max-cache-size only limits the memory contexts used by the cache and the ADB (address database). All other memory contexts are unconstrained. This means that setting max-cache-size to 100% could lead to the OOM killer finding your BIND 9 process and killing it.\nBIND 9.16 uses more memory than BIND 9.11 There are two reasons for this:\nThe networking model has changed. In BIND 9.11 there was a single \u0026ldquo;listener\u0026rdquo; that distributed the incoming work between idle threads. This simpler model was slower than the current multiple-listener model, but because there was a single listener, it also consumed less memory. During 2020 and 2021 we refactored BIND\u0026rsquo;s networking interfaces to use the widely used libuv library in place of the older BIND-specific networking code. BIND 9.16 uses a hybrid of the new and old networking infrastructure. BIND 9.16 uses the new libuv code to receive and process incoming DNS messages (from clients), but it still uses the older networking code for sending and processing outgoing DNS messages (to other servers). This means it needs to run twice as many threads - there is a separate threadpool of workers for client and for server-side communications. 
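The max-cache-size limit described above is normally set as a fixed amount rather than a percentage, precisely because everything outside the cache and ADB is unconstrained. A minimal named.conf sketch follows; the 2g figure is purely illustrative and should be sized to leave headroom for all the unconstrained contexts on your system:

```
options {
    // Limits only the cache and ADB memory contexts;
    // every other memory context remains unconstrained,
    // so leave headroom below total system memory.
    max-cache-size 2g;
};
```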
BIND 9.18 uses less memory than BIND 9.16 BIND 9.18 uses less memory than 9.16, similar to the memory usage in 9.11. The part that sends and processes outgoing DNS messages (server side) was refactored to use the new networking code and therefore uses half as many threads as BIND 9.16 used.\nThe other major change implemented in BIND 9.18 was the replacement of the internal memory allocator with the jemalloc memory allocator. The internal memory allocator kept pools of memory for later reuse and would never free up the reserved memory. The jemalloc memory allocator is much better suited to the memory usage patterns that BIND 9 exhibits and is able to be both fast and memory efficient.\nOur general recommendation for all deployments is to use jemalloc if possible. You can use jemalloc with BIND 9.16 by forcing the linkage via extra LDFLAGS (./configure LDFLAGS=\u0026quot;-ljemalloc\u0026quot; should do the trick).\nMeasuring Memory Measuring real memory usage can be tricky, but fortunately, there are some tools to help with that.\nMeasuring Memory Internally The statistics channel exposes counters for memory contexts. The important values are \u0026lsquo;InUse\u0026rsquo; and \u0026lsquo;Malloced\u0026rsquo;. 
The \u0026lsquo;InUse\u0026rsquo; counter shows the memory used \u0026ldquo;externally\u0026rdquo; and \u0026lsquo;Malloced\u0026rsquo; includes the management overhead (the more memory contexts, the more overhead there is).\nYou can use this attached memory-json.py script to parse the statistics channel output and produce the following data (this is from the main branch):\nOpenSSL: 268.8KiB 277.0KiB\nuv: 6.1KiB 14.3KiB\nlibxml2: 1.0KiB 9.2KiB\n\u0026lt;unknown\u0026gt;: 9.2KiB 17.4KiB\nmain: 1.4MiB 1.5MiB\nloop: 10.8MiB 10.8MiB\nzonemgr-mctxpoo: 20.5KiB 86.1KiB\nclientmgr: 768.0B 66.4KiB\ncache: 31.9KiB 48.3KiB\ncache_heap: 2.1KiB 18.5KiB\nADB: 525.7KiB 542.1KiB\nSUMMARY\nINUSE: 13.1MiB == 13.1MiB\nMALLOCED: 13.3MiB == 13.3MiB\nMeasuring Memory Externally The rule of thumb is \u0026ldquo;Don\u0026rsquo;t use the \u0026rsquo;top\u0026rsquo; command\u0026rdquo; - there are better tools that are less misleading. There are two tools that are easily available on modern Linux systems - pmap and smem.\npmap pmap provides detailed statistics, but can be too chatty - the basic usage is pmap -x -p \u0026lt;pid\u0026gt;. It prints information about all pages used by the program, which includes shared libraries, the program itself, and the heap. The important number is the last one, \u0026ldquo;Dirty\u0026rdquo; - it shows the memory \u0026ldquo;used\u0026rdquo; by BIND 9.\nExample pmap output could look like this:\n$ pmap -x -p $(pidof named)\n3301879: /usr/sbin/named -4 -g -c named.conf\nAddress Kbytes RSS Dirty Mode Mapping\n000055872b587000 88 88 0 r---- /usr/sbin/named\n[...too many lines...]\n00007ffc52753000 132 40 40 rw--- [ stack ]\n00007ffc527c1000 16 0 0 r---- [ anon ]\n00007ffc527c5000 8 4 0 r-x-- [ anon ]\n---------------- ------- ------- -------\ntotal kB 760180 74324 60708\nsmem smem provides fewer details, so if you want only a single number, run smem -P named and look for the USS column - this provides the information about memory used by the program sans shared libraries. 
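The per-context aggregation that the memory-json.py script performs can be sketched in a few lines of Python. This is a minimal sketch, not the actual script: it assumes the JSON statistics expose a memory.contexts array with name, inuse, and malloced fields (field names can differ between BIND versions), and the payload below is synthetic rather than real statistics-channel output.

```python
from collections import defaultdict

def summarize_contexts(stats):
    """Sum 'inuse' and 'malloced' bytes per memory-context name."""
    totals = defaultdict(lambda: [0, 0])
    for ctx in stats.get("memory", {}).get("contexts", []):
        name = ctx.get("name", "<unknown>")
        totals[name][0] += int(ctx.get("inuse", 0))
        totals[name][1] += int(ctx.get("malloced", 0))
    return dict(totals)

# Synthetic stand-in for the statistics-channel JSON document:
sample = {
    "memory": {
        "contexts": [
            {"name": "cache", "inuse": 32768, "malloced": 49152},
            {"name": "cache", "inuse": 1024, "malloced": 2048},
            {"name": "ADB", "inuse": 538000, "malloced": 555000},
        ]
    }
}

summary = summarize_contexts(sample)
print(summary["cache"])  # [33792, 51200]
print(summary["ADB"])    # [538000, 555000]
```

Against a live server, the same aggregation would be applied to the JSON document fetched from the HTTP listener configured with the statistics-channels option.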
The PSS column adds each shared library\u0026rsquo;s memory divided by the number of programs using it, and RSS is the normal Resident Set Size.\n$ smem -P named -a\nPID User Command Swap USS PSS RSS\n3301879 ondrej /usr/sbin/named -4 -g -c named.conf 0 69664 70201 74324\nDifferences There are a couple of reasons that the numbers reported by the BIND 9 statistics channel might differ from the memory usage reported by the operating system.\nExternal libraries\nBIND 9 uses several external libraries - OpenSSL, libuv, libxml2, json-c and possibly others. All these also need memory from the operating system to operate. The difference should not be large, but it\u0026rsquo;s also not negligible. If the difference between the used memory reported by the internal statistics channel and USS is large (on a busy server), then congratulations, you\u0026rsquo;ve found a leak in an external library. (NOTE: BIND 9.19 - the development version - provides its own memory context for OpenSSL, libuv, and libxml2 if the library versions are recent enough.)\nMemory fragmentation\nThere\u0026rsquo;s quite a lot of churn in memory allocations and deallocations on a busy server, and memory gets fragmented - the default Linux allocator isn\u0026rsquo;t particularly good with the BIND 9 memory usage patterns. Using jemalloc is strongly recommended as it handles memory fragmentation much better and is also faster.\nMemory Profiling When BIND 9 is compiled with jemalloc (or jemalloc is loaded using LD_PRELOAD), it can produce heap snapshots based on triggers (time, size, \u0026hellip;). 
These snapshots can later be analysed using the jeprof tool to see where the memory went.\nThe basics are:\nexport MALLOC_CONF=\u0026#34;abort_conf:true,prof:true,lg_prof_interval:19,lg_prof_sample:19,prof_prefix:jeprof\u0026#34;\nexport LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 # you don\u0026#39;t need this if compiled with jemalloc\n/usr/sbin/named # use the normal options and configuration that you use in production\nYou\u0026rsquo;ll probably need to fine-tune the lg_prof_interval and lg_prof_sample numbers (they are log base 2) to get the desired file size.\nAfter running the benchmark or a regular workload, you should end up with a bunch of jeprof.\u0026lt;pid\u0026gt;.\u0026lt;m\u0026gt;.i\u0026lt;n\u0026gt;.heap files. Pick the latest and run:\njeprof \\\n--show_bytes \\\n--nodefraction=0 \\\n--exclude=\u0026#34;default_memalloc|mem_get|isc___mem_get|isc__mem_get|mem_allocateunlocked|isc___mem_allocate|isc__mem_allocate|isc___mem_strdup|isc__mem_strdup\u0026#34; \\\n/usr/sbin/named **HEAP FILE** --pdf \u0026gt; \u0026#34;jeprof.pdf\u0026#34;\nMore options can be found in the jeprof manual page. The output must be interpreted with knowledge of BIND 9 internals. That said, if you are reporting what you think is a memory issue, attaching the jeprof output will certainly help.\nGraphs Resolver Benchmarks Below is a graph comparing memory usage in BIND 9.11, 9.16, 9.18, and 9.19 (aka main).\nAs you can see, 9.18 and 9.19 memory usage is in the same ballpark as 9.11. 9.16 uses twice as much memory (because it uses double the number of worker threads, as explained above). The following two graphs show that despite this improved memory usage, 9.18 and 9.19 actually have lower latency than 9.11.\nThis blog was originally posted on the BIND 9 development wiki page. 
Check there for further updates if this topic is very interesting to you.\n","permalink":"/blogs/2023-BIND-memory-management-explained/","tags":["BIND","Open Source"],"title":"BIND Memory Management Explained"},{"categories":["BIND","Kea","DHCP","Company News"],"contents":"At the end of 2022, we are introducing some changes to the tools and procedures we use for signing the source code releases of our software, to make the whole process simpler and more robust.\nWhat Is Going to Change and Why? In the past, during any given one- or two-year period, ISC used a single code-signing PGP key that was rolled over after reaching its intended expiry date. Since 2019, each PGP key also had signing-only subkeys associated with it, to help limit the exposure of the primary key itself. Unfortunately, this approach turned out to present practical organizational challenges related to handling the primary key in a geographically-dispersed environment. We have also been publishing multiple signature files along with each source tarball, each of which was prepared using a different hashing algorithm (SHA-1, SHA-256, or SHA-512).\nWe have decided to simplify our approach to signing our source code releases. We are migrating to a new set of independent signing keys, each of which:\nwill be stored on a hardware token, will not have a preset expiry date, is expected to remain in use until revocation. A PGP public key block containing the new set of signing keys is available on the ISC website, along with a list of key fingerprints. Both of these will get updated over time as the set of PGP keys expected to sign source code releases of ISC software evolves.\nWhen Are the Changes Going to Happen? 
The timeline for the migration to the new set of signing keys is as follows:\nDecember 2022 through January 2023: each source code release of ISC software will still be accompanied by four signature files; among those, however, there will be both files prepared using the 2021-2022 code-signing key and files prepared using the new code-signing keys:\nthe *.asc file will contain signatures prepared using one of the new signing keys,\nthe *.sha1.asc, *.sha256.asc, and *.sha512.asc files will contain signatures prepared using the 2021-2022 code-signing key.\nFebruary 2023 onward: with the 2021-2022 code-signing key expiring on February 1st, 2023, each source code release of ISC software made after that date will only be accompanied by a single SHA-512 signature file, *.asc, which will be prepared using one of the new signing keys.\nWhat Do I Need to Do? If you are getting ISC software from precompiled packages (either those provided directly by ISC, or those provided by your operating system vendor or published in third-party package repositories), you do not need to do anything; the changes described above only apply to source code releases. The keys used for signing ISC-provided packages will not be affected.\nIf you are a packager of ISC software or you are building and installing ISC software directly from source, you may need to adjust your build recipes and/or procedures to account for the above changes. 
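The verification step itself is a standard GnuPG detached-signature check. The sketch below is deliberately self-contained for illustration: it generates a throwaway key in a temporary keyring and signs a stand-in file, since ISC's real key block and a real release tarball are not assumed to be present. Against an actual release, you would instead import the public key block published on the ISC website and run the same gpg --verify command on the downloaded tarball and its *.asc file; the bind-x.y.z filenames are placeholders, not real release names.

```shell
# Use a throwaway keyring so this demo does not touch your real GnuPG setup.
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key demo@example.com default default never

# Stand-ins for a source tarball and its detached *.asc signature.
echo "source tarball stand-in" > bind-x.y.z.tar.xz
gpg --batch --armor --detach-sign \
    --output bind-x.y.z.tar.xz.asc bind-x.y.z.tar.xz

# The actual verification command; exit status 0 means a good signature.
gpg --verify bind-x.y.z.tar.xz.asc bind-x.y.z.tar.xz && echo "signature OK"
```

For a real release the verification line is identical; only the key import (gpg --import of ISC's published key block) and the filenames change.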
Depending on your current setup, you might encounter signature verification errors while processing ISC software released starting in December 2022, until the new signing keys are added to your local set of PGP keys expected to sign source code releases of ISC software.\nPlease feel free to reach out to us by email to info@isc.org, or via your regular support channel if you have any questions about the planned changes.\n","permalink":"/blogs/2023-changes-to-isc-software-signing/","tags":["BIND","Kea","DHCP","New Releases","Open Source","Security"],"title":"Changes to ISC Software Signing"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"There was some good news this year, and plenty of bad news. But through it all our faithful pets kept us company, offering us warmth, cuddles, and lots of great photo opportunities. So ISC\u0026rsquo;s 2022 holiday card pays tribute to the Pets of ISC.\nNo matter where you are and no matter who keeps you company as you work, all of us at ISC wish our customers and users a very happy holiday season and a year of good health and good companionship in 2023.\n","permalink":"/blogs/2022-holiday-card/","tags":["holidays","2022","ISC"],"title":"Happy Holidays from ISC!"},{"categories":["BIND","Open Source","Events"],"contents":"Aliasing in the DNS and HTTPS/SVCB RRs It would be very handy to be able to establish an alias for an apex record in the DNS. Unfortunately, this is not a standardized feature of the DNS, although some operators have created proprietary features with this effect.\nThe new HTTPS and Service Binding (SVCB) resource records on the brink of standardization in the IETF are supposed to \u0026ldquo;solve\u0026rdquo; this long-standing interoperability problem. In this webinar, Carsten explains:\nWhat is a CNAME? What is an alias? What is the problem with CNAMEs at the zone apex? How is this new HTTPS/SVCB system supposed to work? What do we know about implementation status? 
A recording of this webinar can be found on ISC\u0026rsquo;s YouTube channel.\nMemory management in BIND 9 Users frequently ask us how to optimize BIND 9 memory usage, for the best performance with a \u0026ldquo;safe\u0026rdquo; amount of overhead. This is an impossible question to answer! The truth is, every installation is different and the answers are platform-specific. So how can you measure memory usage on your platform, and how can you adjust BIND memory usage if you need to? Find out at our December webinar!\nHow should you measure BIND 9 memory usage (Authoritative and Resolver)? Why does BIND 9.16+ use more memory than older versions? What is jemalloc and how does it help with BIND 9 memory usage? What BIND features can you adjust to change BIND\u0026rsquo;s memory usage? A recording of this webinar can be found on ISC\u0026rsquo;s YouTube channel. Slides are here\nSpeaker information: Carsten Strotmann has more than 20 years of professional experience working in open source networking. He has delivered many webinars and training classes on open source solutions and DNS/DHCP/IPv6/Linux/Unix security. He also is the author of various articles on IT security topics in computer magazines.\n","permalink":"/blogs/upcoming-webinars-2022/","tags":["BIND","Webinars","Events","Training"],"title":"Fall 2022 DNS Webinars"},{"categories":["BIND","Open Source","Events"],"contents":"DNS Wildcards A \u0026ldquo;wildcard\u0026rdquo; is a DNS resource record whose owner\u0026rsquo;s name starts with an asterisk. It sounds like a very simple hack to save time when creating a lot of similar names, but there may be cases where a wildcard is not what you want.\nWe\u0026rsquo;re sorry you missed our webinar, but you can still view the recording at: https://youtu.be/B_V16ehQIIs\nOur presenter, Carsten Strotmann, covered:\nWhat are wildcards good for? How are wildcard names expanded? Can wildcard domains have subdomains? 
Can wildcard domains be empty non-terminal zones (ENTs)? What about wildcards and NSEC/NSEC3? Speaker Information: Carsten Strotmann has more than 20 years of professional experience working in open source networking. He has delivered many webinars and training classes on open source solutions and DNS/DHCP/IPv6/Linux/Unix security. He also is the author of various articles on IT security topics in computer magazines.\n","permalink":"/blogs/dns-wildcards-webinar-2022/","tags":["BIND","Webinars","Events","Training"],"title":"DNS Wildcards Webinar"},{"categories":["ISC","Open Source"],"contents":"ISC is hiring another technical support engineer! The technical support team is the heart of ISC: the revenues from technical support services are what fund ISC\u0026rsquo;s operations. Our support customers also are frequently the ones who alert us to subtle, hard-to-find software problems, and work with us to resolve them - for the benefit of all users.\nThe team right now includes six people, two of them part-time. We need someone who can work US business hours, and take turns being on-call during off-hours, to help us maintain our worldwide coverage. We don\u0026rsquo;t care where you live, as long as you have excellent, reliable Internet access.\nWe are a very friendly and fun group, with flexible policies and generous benefits. We all work 100% remotely, and usually get together once or twice a year in person in some cool place.\nHow can I know if I could do the job? Our support engineers have nearly all been successful systems administrators in the past, most for several networks of increasing size and complexity. Our most senior support engineer, however, has a background in technical support, so that obviously works too. The people who like this job, and do it well, seem to be either motivated by their interest in solving problems, or by a desire to help people. 
Or both!\nObviously, successful applicants need to have a very broad grasp of networking fundamentals, know the differences between e.g. TCP and UDP, have fairly good Linux/Unix-foo, have extensive experience with either DNS or DHCP in particular, and be good at describing solutions to customers in English. We are very flexible about educational background and past experience in hiring, but we care a lot about the applicant\u0026rsquo;s responses to our technical screening questions. We also would love to see any examples you can share of presentations, papers, or anything else like that where you are explaining something technical.\nHow can I tell if I would like this? The people we have hired for this position in the past have stayed with us for many years, which is some evidence that they enjoyed the work. Some of them even leave and come back again!\nThis is not a phone-bank kind of support job: you will probably not handle more than one or two cases a day, most of your communication will be via email, and you will have time to do some testing and research to help the user. Our support customers are generally very well qualified and experienced sysadmins themselves, and when they need help, there is usually a subtle problem lurking in the configuration or there is a bug. Sometimes, they are trying to do something that our software just doesn\u0026rsquo;t do, in which case it will be time for you to turn on the charm and convince software engineering to enable that.\nIn between handling customer issues, you will have time to write technical articles, read and respond to users on our public mailing lists, test new software versions and generally keep your skills up. We have a very active internal chat network, and you will have regular meetings with the software developers to discuss the trickier support issues.\nI\u0026rsquo;d like to apply! If this sounds like it might be fun, we encourage you to apply. 
We generally interview everyone who makes a reasonably good showing on the screening questions. Seriously, don\u0026rsquo;t skip those.\nThe current list of jobs available at ISC is always here.\n","permalink":"/blogs/isc-hiring-2022/","tags":["Open Source"],"title":"ISC is Hiring"},{"categories":["DHCP","ISC DHCP"],"contents":"What is changing? The 4.4.3-P1 and 4.1-ESV-R16-P2 versions of ISC DHCP, released on October 5, 2022, are the last maintenance versions of this software that ISC plans to publish. If we become aware of a significant security vulnerability, we might make an exception to this, but it is our intention to cease actively maintaining this codebase.\nDoes this mean ISC DHCP won\u0026rsquo;t work anymore? No. The existing open source software will continue to function as it has, and current operators do not need to stop using ISC DHCP. Many networks with stable DHCP systems that are well-isolated from the open Internet can continue using ISC DHCP as long as their current systems function. However, it is time to start thinking about a migration plan to a more modern system that is actively maintained.\nNetwork and system administrators deploying DHCP in new environments should look beyond ISC DHCP for a solution, as it would be irresponsible to invest in new deployments of this software which is now end-of-life. Naturally, ISC suggests new users consider our Kea DHCP server, but there are alternatives.\nWhy are you doing this now? It is simply time to end this project. ISC DHCP is very mature code. It is not designed or instrumented for testability, so at this point, it is very risky to modify anything. As a result, we have made fewer and fewer changes to it in recent years. Since 2015, ISC has been investing in creating a new DHCP server, Kea, that we are actively developing and plan to maintain for the foreseeable future. We want to be very clear that we recommend Kea over the much older ISC DHCP version.\nWhat is not changing? 
The ISC DHCP GitLab repository hosted by ISC will remain open and available for logging issues and submitting patches. We do not plan to work on these issues or patches, but the repository will serve as a community resource for information for users. Users reporting suspected security bugs should NOT report them via the open repository, but should continue to report these to security-officer@isc.org, via encrypted email. See https://www.isc.org/reportbug/ for specifics. Since ISC DHCP is open source, anyone who wishes to fork it and continue maintaining it, according to the MPL 2.0 license requirements, is welcome to do so. Already-released versions of ISC DHCP will remain available on our FTP server and download server. The user mailing list hosted by ISC at https://lists.isc.org/mailman/listinfo/dhcp-users will remain open. ISC will continue to provide Professional Support Services to our existing customers, including patches for critical bugs or security vulnerabilities. This announcement does not imply any change to our support services. ISC DHCP\u0026rsquo;s history and evolution The ISC DHCP server was originally written for Internet Software Consortium (ISC) by Ted Lemon and Vixie Enterprises with the goal of being a complete reference implementation of the DHCP protocol. They maintained it through version 3.0, which had its final release in January 2003. Since then, ISC DHCP has been maintained, and the DHCP protocol extended, by a dedicated team of engineers at ISC.\nThe first release of the ISC DHCP distribution in December 1997 included just the DHCP server. Release 2 in June 1999 added a DHCP client and a BOOTP/DHCP relay agent. DHCP 3 was released in October 2001 and included DHCP failover support, OMAPI, Dynamic DNS, conditional behavior, client classing, and more. The 4.0 release in December 2007 introduced DHCPv6 protocol support for the server and client. 
The client and relay components reached their End-of-Life in January 2022.\nThe development of ISC DHCP paralleled the development of the protocol in the DHC working group (WG). The DHC working group was founded in 1989 by Ralph Droms, who also wrote IETF RFC 1531, the first version of the Dynamic Host Configuration Protocol; it was standardized in October 1993. DHC is now the oldest IETF working group still in operation.\nThis is the first version of ISC DHCP recorded in CVS, later converted to git: https://gitlab.isc.org/isc-projects/dhcp/-/tree/d7837182d3cc1aa28fec66ebb2e23f759ea003b9 (note the minutes of a DNS working group meeting in 1993 and a DHC working group meeting from 1994 in the docs folder).\nAt that time, according to the meeting minutes, there were several DHCP server implementations: from FTP Software and from Microsoft for Windows NT; from SunSoft for Solaris; from Novell for NetWare; from IBM for OS/2 and AIX; and from WIDE for UNIX, BSD, SunOS and HP/UX.\nISC DHCP has had four primary authors: Ted Lemon 1995 - 2002 David Hankins 2005 - 2010 Shawn Routhier 2008 - 2016 Thomas Markwalder 2014 - present Other contributors include: Mark Andrews, James Brister, Ben Cottrell, Francis Dupont, Evan Hunt, Shane Kerr, Tomek Mrugalski, Damien Neil, Jeremy Reed, and Paul Selkirk. This doesn\u0026rsquo;t include the many people who have advised and assisted ISC DHCP users over the years on the dhcp-users mailing list. Simon Hobson, Sten Carlsen, Glenn Satchel, Phil Mayers, Gregory Sloop, and Steinar Haug, along with many others, have generously shared their time and expertise with other users on the list, which will remain open and, presumably, active.\nHere are some comments from the four core maintainers.\n1. Ted Lemon, original author 1995 - 2002 Ted Lemon remembers:\nThe way the DHCP work came about was that I was between jobs, and Paul (Vixie) knew me from the time when we both worked in IT at Digital Equipment Corporation (now defunct) in Palo Alto. 
I asked him if he had anything interesting going on, and he mentioned that he thought it would be useful to do a DHCP server to go alongside BIND 8, and asked if I would be interested in working on that. In all honesty, I had not been a fan of DHCP prior to that time, but it seemed like an interesting project, so I said yes. Development proceeded fairly rapidly, and we quickly landed on the much-loved sequential text file as the database model for persistent storage. The rest is history.\nIf I had it to do again, I probably would not write it in C, but I have no desire to go back to that code. It amazed me that a company in Japan actually managed to run 600k clients on an ISC DHCP server by using really long lease times.\n2. David Hankins, primary maintainer 2005 - 2010 David Hankins took over ISC DHCP maintenance as an employee of ISC.\nISC was approached by Comcast in 2006 to implement DHCPv6, as they were looking for an open source DHCPv6 server to test IPv6 functionality on DOCSIS 3.0 modems. ISC released ISC DHCP 4.0 in December 2007, adding support for DHCPv6. David said, \u0026ldquo;I remember everyone tolerating my whimsy in making ISC DHCP v4 the release that implemented IETF DHCPv6.\u0026rdquo; Shane Kerr implemented the server support for IPv6 while David updated the client. Evan Hunt and Francis Dupont helped with the addition of DHCPv6 support. (Please see https://www.isc.org/blogs/isc-dhcp-and-ipv6-the-dhcpv6-story/ for more details.)\nOne feature of ISC DHCP that required many tweaks over the years was failover. David remembers:\n\u0026hellip; there was an IETF draft circulating for a standard failover protocol which Ted implemented. It wound up expiring on my watch as there was little interest in an inter-vendor interoperable failover feature; folks seemed to want failover among the same vendor, or at least the vendors thought so. 
The protocol was finicky, and we wound up having a series of problems with \u0026ldquo;drift\u0026rdquo; between failover servers, ultimately running out of leases, etc.\nWe asked David why he thought ISC DHCP was so popular.\nI think ISC DHCP was popular because the (complex) configuration language made a lot of custom integration possible. On the client, Ted chose to integrate with the operating system by using shell scripts, which made it easy to make local integrations - this and client-side implementations of Dynamic DNS bindings are probably what made it popular at the time. The relay was never really popular - people mostly used their Ethernet switches\u0026rsquo; features for relaying - but it provided value by being a \u0026ldquo;tool in the toolbox\u0026rdquo; so to speak, useful in a pinch.\nISC DHCP sort of fit a niche where it performed well enough at large scale, and wasn\u0026rsquo;t overly large for use on a home network, but was a bit too big for an embedded system with limited resources. The ISC Kea server has developed into a superior platform, and it feels to me like the time is right to set ISC DHCP aside.\nWe shouldn\u0026rsquo;t mourn that we\u0026rsquo;re saying goodbye to a stout little piece of software that\u0026rsquo;s served most of the Internet and kept people connected. Instead, we should celebrate that the world is losing a Turing-complete configuration language. :)\nWhen David Hankins left the project, he wrote a \u0026ldquo;Handoff\u0026rdquo; document, which is archived on the wiki page for the development repository. This includes some humorous commentary on weaknesses of the DHCP protocol development process and ISC DHCP\u0026rsquo;s implementation in particular.\nHe now works at Twitter, automating their infrastructure. He is still sharing funny stories about his work as @hcr64.\n3. Shawn Routhier, primary maintainer 2008 - 2016 Shawn Routhier took over from David Hankins when David left ISC. 
Shawn remembers his primary feature contribution to ISC DHCP as adding support for asynchronous DDNS, included in DHCP 4.2, released in 2009. Shawn also extended the IPv6 support, in the major 4.3.0 release. In addition, during his tenure, Shawn supported several companies that implemented commercial IPAM products, using ISC DHCP and BIND 9. During those years there was a lot of push and pull to accommodate various non-standards-compliant clients, and to adjust the failover behavior. Now, Shawn can be found at Infoblox, where he is still supporting this software, as well as newer Infoblox products incorporating ISC\u0026rsquo;s Kea DHCP server.\nShawn said, \u0026ldquo;I\u0026rsquo;m sad to hear that ISC DHCP is being EOLed but I guess all good things must come to an end.\u0026rdquo;\nAmong other accomplishments during Shawn\u0026rsquo;s tenure, ISC DHCP turned 18 in November 2013 and ISC DHCP 4.3.0 was released in February 2014.\n4. Thomas Markwalder, primary maintainer 2014 - present Thomas joined ISC to work on BIND 10 and Kea, and was asked to back up Shawn on ISC DHCP to improve the \u0026ldquo;bus factor.\u0026rdquo; Thomas Markwalder offered these reflections on ISC DHCP:\nISC DHCP code harkens back to the days when C++ was bleeding edge and poorly supported, HTTP wasn\u0026rsquo;t even an acronym yet, and a lot of the object-oriented constructs, patterns, and libraries that engineers take for granted now weren\u0026rsquo;t even beer stains on cocktail napkins yet. Even so, Ted Lemon emulated a lot of those concepts manually, in C. OMAPI is a great example. It resembles a RESTful API for manipulating \u0026ldquo;objects\u0026rdquo; in ISC DHCP\u0026rsquo;s problem space and yet it was authored when \u0026ldquo;restful\u0026rdquo; was still how you hoped your next vacation was going to be. 
The configuration language in ISC DHCP is another forward-looking feature: it\u0026rsquo;s actually a pretty potent 4GL whose flexibility has been a hallmark of ISC DHCP from the beginning. Nearly every approach in the code base seems to have been created to solve things as extensibly as possible long before most folks even thought about such things.\nAnd while some people, myself included, may grumble a bit about the code base being inscrutable, it delivers an awful lot of bang for the buck, still compiles on a wide variety of platforms, functions reliably, and yet offers more than 31 flavors for doing things. I doubt anyone had the faintest idea that it would become as prolific as it has nor still be in use decades after its inception.\nThomas Markwalder\u0026rsquo;s DHCP Developers\u0026rsquo; Survival Guide is posted on the GitLab wiki page for ISC DHCP. Thomas remains at ISC, where he helps to support our ISC DHCP support customers and works on Kea DHCP.\nHere are some ISC DHCP highlights from Thomas\u0026rsquo;s tenure as primary maintainer:\nThe ISC DHCP 4.4.0 branch, first released in early 2018, was re-licensed under the MPL 2.0 open source license. (Earlier versions remain under the original ISC License.) At that point, we announced that this would be the last supported branch of ISC DHCP.\nIn June of 2021, we announced that the ISC DHCP client and relay components were End-of-Life. At the time, we tried unsuccessfully to find someone who might want to take over that code maintenance.\nThis announcement should not come as a surprise to any active users of ISC DHCP. 
However, if you have questions or concerns, please feel free to either discuss them on the dhcp-users mailing list or email us at info@isc.org.\n","permalink":"/blogs/isc-dhcp-eol/","tags":["dhcpd","ISC DHCP"],"title":"ISC DHCP Server has reached EOL"},{"categories":["ISC","Open Source"],"contents":" Each quarter we like to profile one of our engineers, to acquaint our customers and users with some of the people they may interact with as they use our software.\nThis October is the 25th birthday of one of the Internet\u0026rsquo;s most notorious email addresses, dot@dotat.at. It belongs to Tony Finch, who joined ISC in March.\nTony has been a regular on the bind-users mailing list for many years, and one of the most prolific external contributors to BIND. His first change to the code was committed in 2011, adding line-editing support to nsupdate.\n\u0026ldquo;My career has been about building and operating distributed systems using open source software,\u0026rdquo; says Tony. \u0026ldquo;I\u0026rsquo;ve been lucky to have interesting jobs where we pushed the software hard, and often needed to adapt the code so that it worked the way we needed. Sometimes that meant improving the documentation, sometimes fixing usability papercuts, sometimes adding features to the core request handling loop.\u0026rdquo;\nAn example is BIND\u0026rsquo;s minimal-any feature, for which Tony is mentioned in RFC 8482. \u0026ldquo;One of our secondary zones was the target of a DNS reflection attack which caused severe problems for one of our DNS servers. I hacked a quick-and-dirty minimal-any patch into production to stop the attack early, then polished it up into something other people can use to defend themselves too.\u0026rdquo;\nBefore joining ISC, Tony worked at the University of Cambridge. \u0026ldquo;My first task when I joined in 2002 was to build a spam filtering system using SpamAssassin, ClamAV, and Cambridge\u0026rsquo;s own mail software Exim. 
The mail servers had to run their own DNS resolvers to take the heavy load of all the mail domain and DNSBL checks.\u0026rdquo;\nCambridge University was an early adopter of DNSSEC. Tony\u0026rsquo;s colleague Chris Thompson signed the University\u0026rsquo;s zones in 2009. \u0026ldquo;That renewed my interest in the DNS, and soon I was signing my own zone dotat.at and running the latest BIND with DNSSEC validation enabled on the mail servers. When Chris retired in 2014 I was the obvious person to take over responsibility for the University\u0026rsquo;s DNS.\u0026rdquo;\nISC is a big change for Tony. \u0026ldquo;The University was bureaucratically reluctant to let me go to conferences, or spend any money for that matter. Even renewing domain names was a chore! To my surprise, before I officially started at ISC I was already booking a trip to Vienna for an IETF meeting.\u0026rdquo;\nIt is a change of scope too, moving from a broad operations role to more focussed software engineering. \u0026ldquo;One of my side projects for several years has been a data structure called a \u0026lsquo;qp-trie\u0026rsquo;, which is beautifully well-suited to be the core data structure of a DNS server. Knot DNS has used a qp-trie for years, and I hope I can adapt it to improve BIND too. BIND\u0026rsquo;s rbtdb is notoriously difficult, and my new team mates at ISC have high hopes that we can simplify it. It\u0026rsquo;s a fun challenge, and as one of the winners of the International Obfuscated C Code Competition I certainly know what kind of code to avoid at all costs.\u0026rdquo;\nTony lives in Cambridge with his wife Rachel, two children, and a lot of books. 
\u0026ldquo;Rachel and I first met at a Terry Pratchett fan convention, and we\u0026rsquo;re looking forward to WorldCon in Glasgow in 2024!\u0026rdquo;\nYou can find out more about Tony\u0026rsquo;s work on his blog, https://dotat.at/@/, and he is active on Twitter as @fanf.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. As always, we welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-finch/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Tony Finch!"},{"categories":["BIND","Performance"],"contents":"On July 30, 2022, Petr Špaček spoke at the DNS-OARC38 conference about the performance effects of DNSSEC validation in BIND 9. This article recaps the highlights of his presentation for those who may be interested. The slides and a recording of the full presentation are available on our Presentations page.\nDNSSEC validation: performance killer? Traditional wisdom tells us that DNSSEC validation uses more computer resources than non-validated responses. But is it true?\nFirst, we have to decide on what resources we\u0026rsquo;re talking about. Then, how do we know how much we need? Let’s be scientific and run some specific tests, looking at latency, bandwidth, CPU consumption, the number of operating system sockets, and memory.\nEverything starts with the dataset: realistic data is crucial. Fortunately, a real European telco gave us actual, anonymized data with a mix of landline and mobile clients.\nIn ISC\u0026rsquo;s tests, we looked at opposite ends of the data spectrum: a lightly loaded server handling 9,000 queries per second (QPS), and a heavily loaded resolver seeing 135,000 queries per second.\nA note about QPS: all queries are not equal. 
Different queries require different amounts of resolver resources, depending on whether there\u0026rsquo;s a cache hit/cache miss, the answer size, and so on.\nWe took our huge dataset and downsampled it into smaller sets, splitting it by source IP address into complete per-client sets of queries: if a client IP address was selected for a sample, all of that client\u0026rsquo;s queries were included; if not, none were. This was important because we needed to keep the cache hit/miss rate exactly as it was in the live dataset.\nTest setup Our test resolver was running BIND 9.18.4 (the latest stable version at the time), and we started each test run with an empty cache. We tested two configurations: one with validation enabled, one without; everything else remained the same between the two sets of tests. The queries were simulated using the DNS Shotgun software tool, which replays queries from a packet capture (pcap) with their original timing and records the exact timing of the queries and of the resolver\u0026rsquo;s replies.\nThen we come to the harder part of our setup: the Internet. We need authoritative servers, including timing, latency, reliability, etc. – which are hard to measure and simulate, so our test server is actually measured against the live Internet. Of course, that causes lots of noise. To counter that, we repeated each measurement 10 times and then performed some post-processing.\nWe were concerned with resources consumed on the resolver side, so we needed tools to monitor resource consumption. We wrote scripts that scraped files in /proc and /sys and stored them on disk with timestamps; when everything was complete, we post-processed the raw test files to generate charts.\nScenario 1: 9000 QPS Note that the 9000 QPS is not a steady query rate like you would get from DNSperf or a similar tool. 
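The per-client downsampling described above can be sketched roughly as follows. This is a simplified illustration with hypothetical field names (the actual tooling operates directly on pcap files, not Python dictionaries); the point is that the keep/drop decision is a deterministic function of the client address alone, so a client's entire query stream is either kept or dropped, preserving the cache hit/miss behavior:

```python
import hashlib

def sample_by_client(queries, keep_fraction=0.5):
    """Keep either all or none of each client's queries, so the
    per-client query mix (and thus the cache hit/miss rate) is preserved."""
    kept = []
    for q in queries:
        # Hash the source IP so a given client is always kept or always dropped.
        bucket = hashlib.sha256(q["src_ip"].encode()).digest()[0] / 256.0
        if bucket < keep_fraction:
            kept.append(q)
    return kept
```

The key property is that the sampling decision never depends on the individual query, only on which client sent it.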
The packet rate from the pcap we used as input for DNS Shotgun jumps around, like a live network.\nLatency Since we\u0026rsquo;re talking about DNS, we\u0026rsquo;re obliged to be obsessed with latency. Traditional DNSSEC validation wisdom tells us that there should be a significant increase in latency because validation is complex, but our experiment doesn’t agree: the latency we saw with both validating and non-validating resolvers was practically the same. In both cases, 90% of the queries were answered within 1 ms; only 10% took 1 ms or more to receive a response, in the first minute, with a completely empty cache.\nAt the last minute of the 10-minute test, the chart lines are even closer – nothing to see. For 9000 QPS, DNSSEC validation causes no difference in latency.\nNumber of OS sockets Traditional wisdom says we’ll need more TCP sockets, because DNSSEC packets are larger. To measure that, we focused on the number of TCP sockets in use, as reported by the operating system. But once again, we see practically no difference between the validating and non-validating use cases. If we focus on just the last minute and average the number of sockets in use, the numbers for validating and non-validating are exactly the same.\nThe number of sockets actually in use at any given moment might not reflect the number of sockets over the test period, since some sockets might be so short-lived that they wouldn’t be recorded. To account for this, we measured the number of TCP sockets in a “time-wait” state; that is, sockets that were closed by the resolver once a DNS transaction was finished and remained in the “time-wait” state for a short period of time after closing. 
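On Linux, the OS-level socket counters discussed here are exposed in /proc/net/sockstat, whose TCP line reports both in-use and time-wait counts. A minimal sketch of the kind of scraping involved (the project's actual monitoring scripts are not reproduced here, so treat this as an assumed approach rather than the exact tooling):

```python
def tcp_socket_counts(sockstat_text):
    """Parse the TCP line of Linux /proc/net/sockstat and return the
    number of TCP sockets in use and in the time-wait ('tw') state."""
    for line in sockstat_text.splitlines():
        if line.startswith("TCP:"):
            fields = line.split()[1:]  # alternating counter names and values
            stats = dict(zip(fields[0::2], map(int, fields[1::2])))
            return stats.get("inuse", 0), stats.get("tw", 0)
    return 0, 0

# A monitoring loop would read the file periodically, e.g.:
# with open("/proc/net/sockstat") as f:
#     inuse, tw = tcp_socket_counts(f.read())
```

Sampling this counter on an interval can miss sockets shorter-lived than the interval, which is exactly why the time-wait count is the more reliable signal.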
This was also measured at the OS level; there is some difference, but only about 14 sockets on average on a resolver that’s handling 9000 QPS, which is insignificant.\nFor UDP sockets, the story is the same: exactly the same number of sockets in use in the last minute of the test, on average.\nBandwidth Traditional wisdom here tells us that DNSSEC validation will cause more packets to be sent; but again, this was not the case in our experiment. There was an extremely small increase in the first couple of seconds of the test, but after that we saw no difference.\nSo bandwidth used must increase, right? Again, not really. We saw a tiny spike in the first second, but once again, by the last minute, there is no practical difference between validating and non-validating.\nCPU consumption If we listen to the conventional wisdom, then DNSSEC validation must be CPU-intensive. But yet again, that is not the case. We may have seen a slight difference in the first minute or two, but the data quickly converged with the values for the non-validating configuration, and by the last minute there is a difference of only 2% on average. In this case, 100% means one CPU core, so 2% is practically nothing.\nMemory use Last but not least: memory use – finally, somewhere where validating makes a visible and reproducible difference, outside the margin of error! Validation uses approximately 10% more memory on average than on a non-validating resolver during the last minute of our tests.\nScenario 2: 135K QPS In this set of tests, we ran the exact same set of experiments as before, but this time the resolver was heavily loaded.\nLatency With a completely empty cache, validation does make a difference in the first minute; the percentage of responses that did not arrive within the client timeout (the typical Windows client timeout is 2 seconds) increased. 
But we notice that in the very first minute, both validating and non-validating resolvers can’t keep up: they\u0026rsquo;re not warmed up and the cache is empty. More than 2% of queries were not answered in time for both validating and non-validating resolvers, so the issue is not with validation.\nBut in the second minute, it’s just slightly worse for validating, and as time goes on their performance converges. By the tenth minute, it’s hard to say if there’s a meaningful difference or not.\nWe can conclude that if there is any difference at all, it’s an approximately 1 ms penalty for approx. 1-2% of queries. That’s the only impact of DNSSEC validation on latency for a busy resolver handling 135,000 QPS.\nNumber of OS sockets Looking again at TCP sockets, is there any difference on the resolver with lots of queries and validation enabled? There is, but it\u0026rsquo;s very small: only five more sockets in use, on average, on a resolver handling 135,000 QPS.\nIf we look at TCP sockets in the time-wait state, there are about 200 more TCP sockets in use on the validating resolver - but again, this is not a significant number, and does not require any OS configuration changes to allow more simultaneously open file descriptors.\nAs for UDP sockets, we again see only a very small difference between the validating and non-validating resolvers.\nBandwidth When we look at the number of packets being sent out with 135,000 QPS, we do see a small difference. However, it\u0026rsquo;s approximately 50 packets per second more being sent out from the validating server than from the non-validating server, which is not even noticeable.\nAnd for the number of bytes received by the server, the difference is not measurable.\nCPU consumption Surely, the CPU must be burning now, right? Again, that is not the case. We saw a slight difference in CPU usage in the first couple of minutes of our tests, but after that the two sets of data quickly converge. 
By the tenth minute, we saw only about a 22% increase in CPU time consumption by the validating resolver; since 100% represents one CPU core, that is roughly one-quarter of a single core - nothing you would notice in practice.\nMemory use Once again, this is the only chart where we see any significant difference between validating and non-validating. With 135,000 QPS, memory use increased by approximately 9% for validating servers, averaged over the last minute of the test. Of course, it requires more memory to store the larger amount of data, but not a very significant amount.\nSo is DNSSEC a performance killer? No. DNSSEC validation has a negligible impact on server latency, bandwidth usage, CPU time consumption, and the number of OS sockets. The only area of our tests in which we saw a significant impact from DNSSEC validation was memory consumption, which increased by about 10% with validation.\nFor those who still think this is too hard to believe, it’s worth considering that:\nDNSSEC validation costs are paid only on cache misses — and DNS cache is incredibly efficient! Even in the first minute of the test, more than 90% of queries are answered from cache, and the ratio increases as time progresses. DNSSEC metadata, which requires a separate round-trip to obtain (DNSKEY records), is requested rarely and is shared at the DNS zone level, which makes the DNS cache even more efficient. If you are looking for the truth and not speculation, you can also measure the impact yourself, using your own traffic capture. The measurement tools are open source! See this video. For more background on resolver performance testing and this test bed, see this earlier blog.\nThis article recaps the highlights of a presentation given at DNS OARC 38. 
The slides and a recording of the full presentation are available on the ISC presentations page.\n","permalink":"/blogs/dnssec-validation-performance-july-2022/","tags":["BIND","performance"],"title":"Performance Effects of DNSSEC Validation - July 2022"},{"categories":["Kea","Open Source"],"contents":"Kea 2.2 is our newest stable branch of the Kea DHCP server, suitable for production deployment.\nKea 1.8 is now EOL. If you are running Kea 1.8 or an older version, we recommend updating. Kea 2.0 will be supported until the release of Kea 2.4. See the ISC Software Support Policy for the Kea release schedule.\nKea 2.2 brings many new features developed during the Kea 2.1 development cycle to a production release. For full details of the new features, see the Kea 2.2.0 Release Notes.\nPostgreSQL backend By popular demand, we have added full support for PostgreSQL as a configuration backend, equivalent to MySQL. Most Kea configuration data can be stored in a separate MySQL or PostgreSQL configuration backend. Store options, pools, and subnets centrally and tag them based on which servers should “subscribe” for those configuration elements.\nApplication security We have made tremendous progress in providing more secure access for Kea, by providing TLS/SSL support for connections between HA pairs and between Kea and backend databases. We have also removed user authentication secrets from the Kea configuration file and put those into a separate passwords file, which can be more strictly access-controlled.\nAs previously announced, we have also now finally removed support for Cassandra as a backend. This option was unpopular, and very difficult to maintain as the nosql model is so different from our SQL backends.\nKea 2.2 adds three new hook libraries The DDNS Tuning Library adds custom behaviors related to Dynamic DNS updates on a per-client basis. 
Its primary feature is to allow the administrator to calculate the domain name (FQDN) to be assigned using a regular expression. This library is included in our low-cost Premium package.\nThe Limits library can rate-limit packet processing, to protect the server against overactive clients, and will also support controlling the number of leases per client. This new hook library, part of the Subscriber package, does not yet match the equivalent functionality in ISC DHCP, but we will continue to improve it.\nThe RBAC library allows an administrator to control authenticated user access to read and write Kea configuration data. This library is primarily useful in a large enterprise environment, and is offered to Silver and Gold support subscribers.\nSources are now available from Cloudsmith Our Cloudsmith repository for binary packages has become very popular, and we think the majority of our subscribers are now using that repository. We are adding new packages for Alpine Linux users (please give us feedback!). We are also providing the source tarballs on Cloudsmith, so users can get the source as well as the binaries from the same place. These source tarballs are signed by ISC, and the signatures for verification are in the repo with the tarballs.\nKea Hooks Basic Commercial End User License We have updated the basic commercial license for the non-open source hooks. (The open source hooks remain licensed under MPL 2.0; this is unchanged.)\nThe Premium hooks package, purchased online without support, is now for smaller businesses and non-profits only. There are 12-month license subscription options for 1,000, 6,000, 15,000 and 30,000 active leases at prices starting at $549. 
We think it is fairer for large enterprises and service providers to pay more, but we also wanted to preserve a low-cost option for universities and other non-profits, so qualifying non-profits can purchase the lowest cost option for up to 30,000 active leases.\nLarger deployments can access the Premium and Subscriber hooks as a bundle, either without support at the Basic level, or with support at Bronze, Silver, or Gold levels. Our levels are:\nBasic - premium and subscriber hooks; advance security notifications (ASNs); but no support Bronze - premium and subscriber hooks; ASNs; email support during business hours Silver - premium, subscriber, and the new RBAC hook; ASNs; 24x7 email and phone support with SLAs Gold - premium, subscriber and the new RBAC hook; ASNs; 24x7 email and phone support with faster response times Our support prices are based on deployment size, as measured by the number of simultaneous leases provided. For more information on the support options, please see our Support page and our Kea Support Subscription datasheet.\nQ \u0026amp; A What does this mean if I have a current Kea Premium Hooks subscription? Your subscription will be honored for the original period under the original terms, of course. The DDNS Tuning hook will be added to your download site. At renewal time, you will have to review and agree to the new license terms, and if you have a commercial deployment providing over 1,000 simultaneous leases, you may have to upgrade to another subscription level. Please see the text of the EULA for the special terms for non-profits.\nWhat does this mean if I am a current ISC Support Subscriber? Thank you! If you are a Basic or Bronze subscriber, you will be receiving the new DDNS-tuning and Limits hooks. If you are a Silver or Gold level subscriber, you will receive the DDNS Tuning and Limits hooks, as well as the new RBAC hook. 
Also, you will no longer have to search your tickets for an ftp link to download the sources for your premium software; sources are now included in our Cloudsmith repositories.\nReferences\nKea ARM Kea Release Roadmap Kea Significant Features matrix Kea Hooks Basic Commercial End User License ","permalink":"/blogs/kea-2-2-0/","tags":["Kea"],"title":"Kea 2.2.0 Released"},{"categories":null,"contents":"Kea hooks that are part of the Premium, Subscriber and Enterprise hooks packages are commercially licensed. The text of the basic license is provided below.\nKea Hooks Basic Commercial End User License Agreement v2.1.1 (HBCEULA or the License)\nPublished: February 2023\nAGREEMENT 1.1 Defined terms are set out below in Section 8.\n1.2 This Software is supplied to You under the terms of a Subscription and Your rights under this License do not commence until You have obtained a valid Subscription.\n1.3 This License describes rights granted to You by Licensor under copyright as part of the Subscription. The rights granted are world-wide, non-transferable, and without any rights to sublicense. The rights expressly described in this License are the only rights granted and You may not exceed any use expressly authorized here. For the avoidance of doubt no trademark rights are granted.\n1.4 Subscriptions often include disclosure of the Source Code Form of the Software to the subscriber. 
The Source Code Form of the Software remains the confidential information of Licensor, its Related Persons, or third parties.\nLICENSE GRANTS AND CONDITIONS 2.1 If Your Subscription includes disclosure of the Source Code Form of the Software to You, then during the Term and the Extended Operational Term (and not thereafter) You may: (a) use the Source Code Form of the Software to create Modifications and Larger Works; (b) translate the Source Code Form of the Software, Modifications, and Larger Works into Licensed Executables; and (c) copy the Source Code Form of the Software, Modifications, and Larger Works to the extent reasonably necessary to enjoy the rights granted in this License.\n2.2 If Your Subscription includes disclosure of the Source Code Form of the Software to You, You: (a) must keep the Source Code Form of the Software (including Modifications and Larger Works) confidential; (b) may use the Source Code Form of the Software (including Modifications and Larger Works) only for the purposes of enjoying the rights granted to You in this License; (c) may only disclose the Source Code Form of the Software (including Modifications and Larger Works) to those members of Your staff and individual subcontractors under Your control who reasonably require such access and who owe equivalent confidentiality obligations to You; and (d) must otherwise comply with any additional obligations concerning confidential information set out in the Subscription.\n2.3 During the Term and the Extended Operational Term (and not thereafter) You may copy, install, and operate Licensed Executables solely for the Authorized Purposes in Authorized System Combinations on DHCP Servers under Your control, provided that the combined total number of Active Leases managed by all of Your installations of Licensed Executables on all of Your DHCP Servers must not exceed the number of Maximum Active Leases.\n2.4 Subject to any rights given to You under applicable law, You may not decompile, 
reverse-engineer, disassemble, or otherwise attempt to derive the Source Code Form for any Licensed Executable supplied to You under the Subscription.\n2.5 During the Term and the Extended Operational Term (and not thereafter), You may make, retain, and operate a reasonable number of copies of Licensed Executables: (a) for test and analysis purposes in a test environment that is not in productive use (and leases issued in such test environments do not count towards the maximum number of Active Leases specified above); and (b) for backup purposes.\n2.6 This License does not grant You any rights to distribute the Software, Modifications, Larger Works, or any Licensed Executables to any third party (including Your Related Persons).\n2.7 The rights expressly described in this License (subject to any expressly expanded rights described in the Subscription) are the only rights granted to You in respect of the Software.\nINTELLECTUAL PROPERTY Nothing in this License serves to transfer ownership of any copyright or any other intellectual property rights.\nTERM AND TERMINATION Your rights under this License begin at the commencement of the Term. Your rights expire at the times specified in the License.\nMAINTENANCE AND WARRANTY 5.1 Your rights to obtain access to Software updates, patches, and fixes from Licensor, are governed by the terms of the Subscription. 
Your right to obtain access to such updates patches and fixes ends at the conclusion of the Term.\n5.2 THE LICENSOR HEREBY EXCLUDES FROM THIS LICENSE ANY TERMS CONDITIONS OR WARRANTIES THAT WOULD OTHERWISE BE IMPLIED BY OPERATION OF LAW INCLUDING, WITHOUT LIMITATION, TERMS CONDITIONS OR WARRANTIES OF MERCHANTABILITY, SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE, OR THAT THE SOFTWARE IS FREE OF DEFECTS, OR NON-INFRINGING.\nLIABILITY UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE), CONTRACT, STATUTORY DUTY, OR OTHERWISE, SHALL THE LICENSOR OR ANY RELATED PERSON OR ANY OF THEIR DIRECTORS OFFICERS OR STAFF, BE LIABLE TO YOU FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOST PROFITS, LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PERSON SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THE PERSONS DESCRIBED IN THIS SECTION MAY RELY UPON THE TERMS OF THIS LIMITATION OF LIABILITY AND THERE ARE NO OTHER THIRD PARTY BENEFICIARIES TO THIS LICENSE.\nMISCELLANEOUS 7.1 This License is governed by and shall be construed in accordance with the law of New Hampshire.\n7.2 You agree that any dispute arising out of or related to this License shall be submitted to the exclusive jurisdiction of the courts of New Hampshire and You consent to that jurisdiction. 
This jurisdiction provision is without prejudice to the rights of Licensor or any of its Related Persons to take legal action in any other jurisdiction for preliminary injunctive relief or post-judgment enforcement.\n7.3 The rights granted in this License are personal to You and You may not assign novate sublicense or otherwise transfer Your rights to any other person, provided however that You may reasonably transfer the whole of Your rights under the License to a solvent Related Person as part of a genuine reorganization of Your enterprise upon notice to Licensor.\nDEFINITIONS In this License:\n8.1 “Active Lease” means the lease of an IPv4 address, IPv6 address, or IPv6 prefix, which has not yet expired.\n8.2 “Authorized System Combination” means a system in which the Licensed Executable receives and responds to calls made by Related Software, either operating independently as a dynamically linked library or combined together with Related Software into a Larger Work.\n8.3 “Authorized Purposes” means only those purposes described as such in the Subscription, and if no such purposes are specified in the Subscription then this means Leasing Addresses on Your Network used to Provide Internal Network Services and no other purpose. 
For the avoidance of doubt, unless expressly agreed in the terms of Your Subscription the Authorized Purposes do not include: (a) Leasing Addresses on Your Network used to Provide Network Services to Third Parties and (b) OEM (Service Provision)/Leasing Addresses on a Third Party Network.\n8.3.1 “Leasing Addresses on Your Network used to Provide Internal Network Services” means assigning IP addresses on Your network that You operate solely to provide network services to Your enterprise (including Your Related Persons).\n8.3.2 “Leasing Addresses on Your Network used to Provide Network Services to Third Parties” means assigning IP addresses on Your network that You operate to provide network services to Your third party customers (whether as a public network service, private network service, or otherwise).\n8.3.3 “OEM (Service Provision)/Leasing Addresses on a Third Party Network” means assigning IP addresses on a network operated by anyone other than You (i.e., operating DHCP as a service for a third party network operator).\n8.4“DHCP Server” means one instance of a Licensed Executable running as part of an Authorized System Combination on a single computer (whether physical or virtual), and for these purposes a hardware partition, a blade, a virtual machine, and any similar environment that fulfills similar functionality, is each individually counted as a single computer.\n8.5 “Executable Form” means any form of a software work other than the Source Code Form.\n8.6 “Extended Operational Term” means the period of 5 years immediately following the end of the Term.\n8.7 “Larger Work” means a software work that You make by combining the Software with other materials.\n8.8 “License” means the text of this document and the rights granted herein.\n8.9 “Licensed Executable” means: (a) the software work that results when You translate Source Code Form of the Software, including any Modifications or Larger Works that You create pursuant to Your rights in this License, into 
Executable Form; or (b) the Software supplied to You in Executable Form.\n8.10 “Licensor” means the person described as such in Your Subscription who holds lawful authority to grant the License and any additional rights described in the Subscription.\n8.11 “Maximum Active Leases” means the maximum number of Active Leases specified in the Subscription and if no maximum number is specified then: (a) if You are a Qualifying Nonprofit Entity, the number is thirty thousand (30,000) Active Leases; and (b) in all other cases, the number is one thousand (1,000) Active Leases.\n8.12 “Modifications” means a software work that You make by adding to, deleting from, or otherwise modifying, the Software.\n8.13 “Qualifying Nonprofit Entity” means: (a) an entity that is registered and operated as a charity with Group Total income of less than US$1 million per year; or (b) a not-for-profit institution of higher education generally recognized as a place of scholarly research with Group Total operating expenses of less than US$100 million per year and a Group Total endowment value of less than US$250 million; or (c) an entity that has been specifically designated as such in writing by Internet Systems Consortium, Inc or one of its Related Persons. 
For purposes of this definition, “Group Total” means the total attributable to the entity and all of its Related Persons.\n8.14 “Related Software” means (a) software supplied by ISC used for the purpose of implementing DHCP such as Kea, together with (b) any software that interacts with that implementation such as other Kea Hooks supplied by ISC or similarly interacting software that You develop independently or that has been supplied to You by third parties with the right to do so.\n8.15 “Related Person” means, in reference to a first person, any other natural or legal person that controls, is controlled by, or is under common control with, that first person.\n8.16 “Software” means the Kea Hooks software supplied to You under the Subscription, and any updates made available to You during the Term.\n8.17 “Source Code Form” means the form of a software work preferred for making modifications.\n8.18 “Subscription” means the commercial agreement pursuant to which You obtained lawful access to the Software from a person who was authorized to enter into that agreement on those terms.\n8.19 “Term” means the term of the Subscription, commencing on the effective date of the Subscription and concluding on the end date specified in the Subscription, and if no end date is specified then concluding upon the first to occur: (a) the first anniversary of commencement; or (b) termination of the Subscription for any other reason.\n8.20 “You” and “Your” refers to the natural or legal person who is the intended and expressly declared beneficial customer for the Subscription.\n","permalink":"/kea-premium-license/","tags":null,"title":"Kea Hooks Basic Commercial End User License Agreement"},{"categories":null,"contents":"ISC\u0026rsquo;s 2021-22 Code Signing Key\n-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBF/u5KMBEAC0hPiTonjYEe5FqNzFn73KmcN8KGD2wzujmWWLnFXGEVDEpFcS ULQDshhCclwNeXUArUey4nficwpqUe+Xl2h4dP4z7yh3WiL5nA5JRjJjw8KJQGVW 
AkgiZTnJHH8DrzNt9LnDL516qMDJarTHemDUUUZLNxnuv0RDEhDxsXWiVCQZZcw/ 41yIY97uCf30dsDwnckVl3iEmYaGTYavWbKP60S8WaxO0YG57RI1etmlIQ0nMmka 4bvFnwwb9Jdnwle4LIiRMCGymsheaKCKrEZgIJY+idyBuExLLykiL8iNBj2Pzi7z XSCniH9qcEwfqgZlP/KZwujLhGOc4c4peNwpuDGcmYZoAsUD8CZ8H/LU1FIR2A1u /UrRREtC8nNTDGxCckSMEquHNURfMk1QmDbJ9gaa9aOk0AArxuTxyj6Cn+KQd5l5 0mN0R1sDVQq9xWdvnB7N0d3MDhnV7f19iUhi3KYvjVTkCMXjhNXjDH/KXFKoFhKa 9SkxYGfW25inwSQoqbP1TE5+rESf57bo+XFxfVQuYfVJ5BlZobz+sRl2iDQyBJDM uDFyXE/t+E76BmwyHeOI1weqUMYebqHgu0x76dTYj9yWgWdQAC1pXi15/MTIaOtQ hWezb5rkI2yZqaZLaRBOIRBIPM5C5AOjL2XbfwUuSr2W4+TvxLocxi48DwARAQAB tE1JbnRlcm5ldCBTeXN0ZW1zIENvbnNvcnRpdW0sIEluYy4gKFNpZ25pbmcga2V5 LCAyMDIxLTIwMjIpIDxjb2Rlc2lnbkBpc2Mub3JnPokCVAQTAQgAPhYhBH4ckayA MKWlnR76uXUPPIdyPkASBQJf7uSjAhsPBQkD60WABQsJCAcCBhUKCQgLAgQWAgMB Ah4BAheAAAoJEHUPPIdyPkAS0lMP/2IgMErScBUaXrZXqYXoluR8xU0p9DyZEBx+ ZGNAcJ2CTPAbn3FrkNGNpK4SOCLXEZPKOQ09umaIxl8H6uEGaTut1JLj1qGaZ8ID 4gAeQcTIN9OQA5ElQo+ci20XE9JSvzqY1zb04EkMuVL678xPCYJhUSLS0MAQkcDJ JQLN17SwNi4vGqzVhnwKUviQU9/s+LRUkThsTg4qT0fNnmGoVJXqrshxJa2ZWM6J QtOWBgJiC6xZ+zRiZS898L0tekU4o9yxtnnDWry2bI+mJbxAp94ZAXgKahOU7LKV 3SPxkx7TAng24nOWi1EaP51pe7usTFH1BR3CUHZdoIQ4xruZGkt/qPumskofzl+1 8bw1bEFbq8S6jC+twT3JUcE02HbEIbrd6l2T8pYBXaojFggGjUTSv9d5YUN5N9U/ /Qy0o3xZwHNdXLx6xSrUO+NT5JU1Nh/0sutEH7ru/YqFZof9vfCbV86y8fIOPgk8 LkJNUSu4QCJ1PHKB+fJp7yAhlPkOXNG1b9+W/hVp96rdkovpCUkLD83s+suQyJGk QB7Qpem7nS4zp7/Naui+g3M3p/uRSzZgELTnXNyY//bw9fOqx5SDLjSUslUMz+TH sFTwfo/Mot70MPHMe6aE6tdTDoJTcv4Iim/8MDhJ6yqKt8sxprataZoWwFi6zAF9 BzWkJcrbuQINBF/u5P4BEACso8iLzFJ+M1wqcsCDup+GtRMzte04CAlLmaLgyzfL 3xxBo4AUgX6UbUCGycG878JVn52S6Nsl6FlasmyH00MGjZt1CuNz4htfSmLGcBMj IwQv1CYR8bm9EPwR15NaWdgzJHShCduMHv4HdfqSa6UQfzO/P8mwioER19fkDQSE U1KsY0yl//ipWiW3ZJGShGHLnn4YbxogQtsRPESKUsQ9MtzuMt3ehGtkN4RguOXC 6pCWP8J4F9lgjSZ+uLOQKV4rmpbSMXntOJi2nu+14Zj36enW8xyAXO/w5z/wci2G LN/aa/v2a3GM3WJQsPNzpDwB+pr1n0Kp+wK6K7siVmDoV+WecD2KNNgOuSyUve7h BjWRM9W13LsgLGhKJA8yUpPvhXk91vLRUhwFJ2GUirxLPLs2TSTjHlHvhcPy6aX2 
HxbHkcOt53n2h0zx7ntl1N7XHozMWmHphPsSvOZ5StuQRAFvfE63EyfR84KUPIbZ kvftbAJPKCJC8W6GqhfORzYZqldDNNva5iYHF1OItF79ZLGI56diNsBV9SOVKk4d f9Qp6urYOd+9RGQGmCQte/WSFaU9z9QYPEGl1NlmGAWt7KKyB6QXZH1oEMwXtPd8 4GQX3XGtyggEp6BGwkFFWRQzF1EZ0maRPrpN4bpQqLXSJiqQxsX+FAcOkhpo6X7b 8QARAQABiQRyBBgBCAAmFiEEfhyRrIAwpaWdHvq5dQ88h3I+QBIFAl/u5P4CGwIF CQPrRYACQAkQdQ88h3I+QBLBdCAEGQEIAB0WIQTpq255IzwEFuiZP0UMA6+pClln xAUCX+7k/gAKCRAMA6+pCllnxDtmD/0YCUccmKudW9PiQw7mI1HSuwL6aS+MlG6/ LJ79nmi6TTpe87NDcEv2bBpVWYcQK87smCxIYyuj4SCZuBQivjyuecipRoG14PUh KU8UiqdF+vKDvUAA7huOBlR4dgr7/KvjirnbwO3mGouwZszDOLvaHuO403+TPm1b mJtEA9y6Wbk/+PTtfPymQwnaiJkPhQ6Q7ZbyasRIisO3MRPacUjt2DXFi5VV/Mya 8o5Pae3zY+5SjMyE2siPnVE4/nzp424jDzSq4DGEUip/x+QYHFwxhCJmdZlRIFmn vSCAGXBpyPVbckC0Gw8kZ8HsGzNbMbx/VjDG3LFT8TR2Djsh99/6icO1J+jDkPNn IFEsYjAw7Tos5IPhIT1XkSCW84KqBG5pGI5h7fJzf19sR7Ki6XyFe6VYvggeQIS7 VN1ISl3tRN/dk0GbrKkUKr0OVfaRD0wXQHTzbec8Fs43G0z/DKoFutGB/J3yjAmw IOcP5R6rqjhVp4APQpsB51XCaaqEXaXZyMWrKILbPIjlE6FHeh1qd+zdIjullnF2 YZv89HU9dIXxKr35CM8f3BWm4D4cRjsUOWoGhMNwdHzHYOdys6T72KBK9D2irz8C L0bycjN+SIpde/auo+dQKqKD3/ipr4dyKJyOUsls9cyhxkFp031cZ5rWbXcLJ8/s 1BeVPjFCngqPD/9rMKA6kCSnTo+rSqZRxo9RlQwy4K6xfPPdHZvBi3A4UYCsurgl qLtFtGG8SMWigmUZWLT6uhsi0orR5wfG7vzajF0Hcd8yuWa4zGeu0rFJXgG64Pyj nJHtv2Tzi8DNY5Y+8mfXqUewyEUXQLxnLqpGlPjNUAJKvjm4SstNadewgWeb6F8x UQJc8owGmK5+yZQ5LZj6bjt9Dr3SCM3Og/iS5XK5POGUJgtgXLXp3uy7p9SzsJ73 qhrDII/YqSwToMu8tUv4xEGxyceVPDm+ywde5SXYmtvMYrq5DBdlalZ9kBlC5fyc IIzKoIOOkKKpa/YAyKdLTk8ZByjDk1RrdcOyP4VNpCvyisf6JPwWfKdM5mxf47hb s7zioUH7miUGA6i5TNi1e+DU2mL92sJwQ0WkHw6KaUez2Y9CaD8hZnQw/h/JcNq6 nb8y0GR8h7qWms3K0rtSs8SuDXUsdZrFAeURivccmohXddtt0FDzkheKGXs27SSl 8oOCh+jl/hEUzz2mJGFwRBo0FI5ipN51IfjhMJ8zzSmvfrtdwT2Tu6wSY9DLsYR7 0tWGOc2HA6o7kdcC1V0p2jvQct281FrC9dTXFgcDuGUBYhzEZeWwjuYQXBzMquF6 ersVnPo/Z5l1SnkK+wVBQbf4igHOaobl0AQxnb86W4CXBTZ3CvRq6o8vWbkCDQRf 7uUlARAA7oTlVZXhdVlPnSQlnI5JwovG2jEIrRifpbyavlhlosX+rgtQ5EILn0DS PJ35CNfOAeOcLQeRrJAZj6w/x9FHWfKRAHUeiTTsVDzTrDyJBCVuC40ck587KVUc 
GuB3vee03/y8qAczj5TZNaDdl+4qAzOFQuV4MjwJOx5fsXZw3dUAS7pw1mTkAYTh nz557buc8JJCxrebT6FvN8bugk7LJ8SYmI154Q5wCdXB6Q42sdSMFlKKPYRRmIvX vI4Ytl/J35v43gCLbXccTWQpBX+ra75sndS2hYGQhcC+WdNtt4THgU6Sb7ErpJK7 7A1r1Wf0WSioQ2VWjT0QbUE+6IXD1J8duh6ZgzuqppMm13aDdMDZGwdcxlFw+vlo bM+IAX+QgzPjslM3FHVvvfCLka+ctMO+lL0bz1G4njNEXcIAILhmoqRI4ItVH7Nl ZI3pAfLLB4qbhTKTIiS+uIoA82RU86ozr5oJZCsJa5N5EpJnYxnjv2tYhU42eh+j hyM+5ra1dXtveKvL5SkVuRUlPZvgOuwQ14Qnj6sv8CmtBpyVpupHmY2RbNtLVLdH Ix3lyQbgVo9iMJIoXiPXmcRWCgLgOeuETjFXsEcFLxuN+D0My0dtwWcg+271vtPn 0orTObxkctFK+V32ByJYxVvytNCW245bICpxCicxmh5kYEmQCnMAEQEAAYkEcgQY AQgAJhYhBH4ckayAMKWlnR76uXUPPIdyPkASBQJf7uUlAhsCBQkD60WAAkAJEHUP PIdyPkASwXQgBBkBCAAdFiEEqtu6UHTxQC97adVrxbTukxqfnf0FAl/u5SUACgkQ xbTukxqfnf2aeg//ZspIr4ETVf3ai0dXCm2Pf6gpM7QUfI9fPUHymvBhNrNhfZqN ADpzbJefzLif8as7kUr904zTc5Jse5a0MzCrMyEwTDIoCKDv2ktLq1L20bwflZs+ oP27CYC5FkJYgLYPrQZ/7hRC8EWjgn6v3seJtEo8G73kiVEBOnxVEfGZ8zxmX1Cp aOWfhiFYCmkEe6Ck9hG+OaWt7+WW0wWT1UFiluzRRAEMROcCUtyB5IPCqCH/Rz/m /bE6G+lHZo6OY/wY2q/oW2f9JB/4QyJeSI+fkjY/wDjfNQjiPMLfZctv25IeZYVY ZvIKrdnjbzRe+GwYLg5G/SbpSOEb5O55Ps8mNUpYFaMCfefW+DG48a4WyUGzFr52 BMKvHKtc6c7P3+muBAqcNZYxRqyLIQiYiV9CCjpIV1WgUeedroHUXvJF/SAvNVvB ZR00I/D2hsD9BFh3B1FEYbw7GuYuG27Z6fgRolOQUeTabjQLI386SV3IxZ1KFwm4 GU8BTbUA2zwT3hu/BaaCI5jTSLyBpdo10b1wgMEnqmXG6AbNdxFVEWwE+CE++BHW 0YBhKp8fghHwwN1fwTCV+QyA4Qn6EBVDkTrUPKqTeCmHzt3AQh8WVrsmrodyr5Yp 69LoRnlkLcGJiOCKMOmkop9Z32ckGieYHrl24Dw6hmUSWDG+pBn0ezbSPit3FhAA qD2y1VzqxsaCOD634Ltq8AbvphP8XZPrrsC3DIA36ITaCQDa5Cn7madLCXy/uP6N +tojtzXf4tUzumwGJGFLtdMXNmuEuXrj++NrU1xcscbvDn5O4NDMadwI1EDlQo7w uWK9jaQAVhF7iDEBEazZe26knQFxC0my4SyO1uQaEg3BKHj6z7dkAjzWJaQZhzql yrRzbCiVUUI8ZkrgM/+/6NJohUG/had6DoefgK6H8/yjgVx1Wtx+XAuBQ2cvclhc TAmHs128dWduNHxI2Yx+uM4kuHYpPKBwdEh91ZNeNqtBJURfSVjBCjKkTYiS7kiv XyvQOBdZVeSVpj/QoAfaUlQoBVm7aF6xf7GtYlVzjMsLYdpjXhy4ZbQQVUuPI+1f yFkw8PpASZ3gvO6KQ4V2w3hOYAxYQ1kSwTtaA7+18nyv65VolTmAotmLun94UKn7 zjopByBnC/XEqsU3tibg9A7xQ2KUpWkpmG35f4ZR9aEIxSe2Jmm+Se0JfiAq6Szf 
dyWvr/TzaS/BZL4WEPk2Vw/mzWEPZOscpIkBFGK+Ul7yuXvbrbwr+zmAikHmTb1V XfPb9eBnwDDuRHhLBym4FMrPjzeziAxxkScTfDjWq6rvMmaEe1CX+dj6ldx9Jp9d iUngol89eSgAQOtptjcit5o0Y0Mu/RF6KIBG89ghFly5Ag0EX+7lVAEQAKFx5asK W7A9BNKPkaXgym0AlW2szQR1nwxi3APLVLS0Al9Y/3mnBbYyO84HDr82AtMSWSMY UZIKtkUj2sVqUb+xHOPkY/MenyoBrCl2qaTVJ89nnWMUjtrX2qk0O09+ByoYXTit BVPAIZ/qZfGNB+Dsp1haNKRdowkf6WXkw7A9dHB5isVmaM/Z0THNJRHwc6mcqbEV M4fDL+OCx6m2KQHTHirk+OE9Nwral82IIqj3d5UBHmjHAbQNXTDzZbWg6tYbLN3I EYxSRQpkJZIVheyBmWFZuivm4hCDZxJlZ1sgxQeIZk6wR2LBR6ccTW6PH11PhIpr 6O8aQh8JUMg+/aJK2eQXINozYdjOTUjnWAUeUqML7Pg/vERRAgHXO9Z+NTIEWEOo Ee+8WOFmrmfjb9Uz27DtymhUjOl0ryiG6F1b90t1rZvVKWR2OaCUhICm88o3MCgb HFeOh7v3tnQb2Uot7kY1hgch6j1MNYWGb8LjwoTAmx9okEv9mh119k+SdVJP6wsX ZtL4860vTfTw6RQM7rkZBzTyf4qCvU5uRSd2u6JqtUhw4m/gkKQyW8jLEkqX7JaT +iEBgPzjALvfSWDbDgst0szqU5jltYpgjG3On7/ZGFFJrkB06orUvovxLThWWvm1 iugw4/av3n64hl/yfxvKQHLQA3Kfkjjzc3oPABEBAAGJBHIEGAEIACYWIQR+HJGs gDClpZ0e+rl1DzyHcj5AEgUCX+7lVAIbAgUJA+tFgAJACRB1DzyHcj5AEsF0IAQZ AQgAHRYhBGFPhWcuJXtdQn6ZBiGZBzrXgrS4BQJf7uVUAAoJECGZBzrXgrS4jfkP /ApYZIRnBL+LdTPYdbZDYXotkE6RO6ZsPdcV1G6na5jJ7igdVuvoz5nP3rX+oQoH 6k9DysQzyh/SkXRPnbOOyvQsI7atmH7SkhNn7ke8zmEJLzApHA0ZMGXtBJHQkZwA 5LDWIQb8HbtJTBr2DyJcQdpRmP3hHDgyYgwg0AUG/2JEwYqps+/pqJCrLSP+GLOA ia+wRH9xwv1Vl2gIxWXqEO6U3puqUg+0z1Av4Gj/xzuw1F3eLrOfgklhpASc8QtC 89kx1nhFS+OybQfRAH7YN9DKE5L1kJxQ4t+uW8TiXf9r+MdcVMEI3LATZRtgowFc 493g7EkTppmqabFns9OamyxXdIzLAKoKvykr7HPCBWUnZn2I2RrcGQltRBQlR0Mb jO+sFi89XnFPwXIw/t/9zoq1bXCGTt7H5RtrfxC1wTYXqLEdV9pptNj7j5mlff9g DMw1v3MfUxbz9gIDzs7ANnw3SkWi+d0v0bLadWdItkq2WKvvgB58NJtKPc8Jwilh nO7W31U/kv8FR9JcFXzS9+Y6ejIClF4FAwr5tK07N/xSFAKEs5kyAYEKxP6vI59m 5h+tO8cws+pi4gqfWa3t3b+dVzKl9AIkWAYjq9FvbfiqZgKTlTviSUMpmK5qJVld 72+NiolUVniJbw9Z10ps4G4zmXSl1ZxyKnehUzcKyPieEEsP/1/tctQx1LhVu0TJ RLtWrE523hqxpqDdF8/QrNp9dX3YVoEkMQW3YYir2oERtaosWXmRjldq5dNfgtwc lhG+/CP5rxNeCJlI+b64pC/yQMCrbz/V74aAipuv7ZZMflgr7ZD5i3jyM/7/AunS qOUPwkKrjetNF85eibeO7c0Y9/HhILkLQ8EoNfJshdc0/scwMZEpLHTMAHSrxCAV 
FuhLsF9epenA6IbtuMsp43aSxshX05RH7F94uj4VCMUSs/90viB5njItpPdZCqUH eXSvLSjxqsmS4Tz9Dn+uWvxleBLRRcpZykuNLGgwVXafWftWbA+U9KaJnDWFdzjJ +gAsWfHfFBOa1RfXYP++e+VJflcHaEZ4byLG5Zf1HqAvvcaShAVuMXY1hoYJinvh uk1zJRW9dP7apZx7BXWxbWcn8LMR5GFfunl/M2iNASmkqxJ9gvy6TBRWJu2QeNbN 5Ks0/GDUawQqvhmM3V6zFQWVsPwaHpufIaGqnKC2gXaIHXPP0ldyXdLXwgZ+6A7D IEqHQB2BDbiJtovk6GaK8PUCEHTiDmRF/mBzlpBJOn+Hc5ELufgr9E2lkrKJzFag CBCucNhVEaUedFrycxfSALing7DJPWb5cobu9K+3T9L3k57XgxSAj+g6vOxHuxHL ve1IPheCWfkKpJH5faFDWKpJYYPauQINBF/u5YABEADgWTS7wFA39XvpWNHSfAAR 2/nlGWuTvD7zoirzUwOd2+I2XYwgl910KsznhlqDrHZlqKuGRjQlbpyTbsOH2N5k IE+0uEXidU3iwslSZ33RLL0h9+czDnlgijYXLCg5ScswBEC1E/kXX685AUCTPX2n D1+Ymxxgov3AvItVxKDd3N5ERsy6hYWPK4ACXt47hJFqPfPtnQe2IdFkRm3bOuX/ X79Kb5N6cAoao65Tpsix1pm6tTNww0+THzIWzK/yhi1/tUOv/QJMEVAxeBAPr+Pm mvjHvsI9RNQt7VnoHVkqJhPDxyQZR2IOVQXvlYyCtkPA4WQlyxLzWM24TG8xhD1v zZzA8qs//o9QI8OLg2ZYxplC4lW6GEZk3GnrTXs7bW6HUq+RlayIbDw7oMs30jAv YyDdQpZrYuZvsWKbKu+65Yi3M5kW0v96LT3ueMJaL/RanL9JhAWuEqyezffsBZ5a 88/i0n9FJ8cQ1fZq2/GLq/mN2JZ3e/HSWynTnlmk+qGk2bq0cRFJNHAs2HNAm0Id pjSFCPmek9j30wp2c2knML+SsSw5h6570mwILuKwFr6i2hyFlPk4H7nP04vPQ8P2 Pu5O/Cfg9rPSBjIi9FsNS8/a29sSuOmsSGHZnMrVUpGw+iKmx/jVejOtqe6hYydu MSQtIU59E2fq5TM4tub6qwARAQABiQRyBBgBCAAmFiEEfhyRrIAwpaWdHvq5dQ88 h3I+QBIFAl/u5YACGwIFCQPrRYACQAkQdQ88h3I+QBLBdCAEGQEIAB0WIQQjoUGa YHzyVyZWN3UsTffOV4ELlAUCX+7lgAAKCRAsTffOV4ELlDerEACBP9kAH17GHloL XJjd1IHttRWU2Qs/VV0H14g14hgRz2/Qa7KRR4mGrXPKS/ctMkDXwlvs4HPUTeO4 MMT38hwxv54AjW7CtF8DR3EQFXKR51roICQognvqpPe1auNERdLzAdcn+NoHEQB7 eyPqjQM3OGGq0SVRwNnv777o+Kd8Ncv/4fR1xvA20Ds94G5vCYpHB6J+lPPVXBmz rOYSf+QZWsXjAZdnAAYkpEjfJhNrqvqSoRxZ0dweCqieenm8Nzt/vdL9nT3+4AGy 5hmaAG2ENj5AhI194gtgACvKwCl5hF0VKMhtm5d9SWS+1quHzgn3UFh3VZrfjPid CR64mIu3RpZe7EcR+lMl7gCJxdFlHVD3z1lbz2V6u+xH4ZsLrTY+v8kDxzY8ojM/ zDbnlEK+xzA9akhlaD3D3wKXRVuSlrxfEVv14mwKN5AYHN7bLL3bjOo9WYtLznH6 Av4GqXSQ+LOl0+6bLKmD68/N0q2IiZwUSOsxTE1fUdYPF8eiN8L+35Qt0jwybieU a3JYtmO8EW4ZEmjJGwKgyrf+eigJN2/0AeBwcJyUw1YfzaqqS35NNyn5eKANyFQ2 
ZhIjuXRyBOoUMBAx2TSm7FGeFOIw+aQgap6HuGbZ0EZBz6hr9ogNC9FVXCPENKo+ GdTGoIEs0n6gGOPP5ssp7xUK3420AM3HEACSmYaNC1Gfq2d81fI0TBJ9ATCRPo14 MjJGiWaFaXoVp/lQeOvlX2JyBG2I6fhMGPGKntCfX+/MERLNAiahQgOjvnOCQdlL hbq+6loQ1eSTX2AXpRlQpvyxLuebbM+HX3N/9mqAksgQdljmqoJQbiE/HqXqjmKe 16ylU3Rjabyc2p/31p7hm0IJ/3yqDsM06FUBJ108SALQyVvKqRA6q1t/Odb3xgt2 isbCEgvhJ8kYz3LQkvTW75rSa1cM53Udd1rbyo1t0PaOSGeUZw73/nY1+6LtUEg7 Q0x4ohL1UE7z7+14mAtn4OvGDuZJil7Lf4cPszf0SFoHPs8iUFpSorBwn3u+5ZXW NYFblPU2WK3O52qZqsjuQI/gK7uQhXjJO5nA5M8Yv7bVrbLMOj64hdOpNbd56Ycc qwYbHZL3WyRAN7TNg5ZlHgIVac22StawjXiHWDGaAXpCaHJn8ryM3LY+LTz16R2M bi+HVaw+0fY9f/mIcOdT6AyDg+V200GkGXL6aw0LZkBZmDin+OMmL7AS8TZ4dvZt zj+sykcT8DsaFj5Au6zHJoCnsuShMquHOA/vcUkhoe8/E2Y2QdiX7zwDM8vFM8tX DujFLNPIZuItcVEpE3ysFV2ZfVgBXoxTlZUQxdgJBQ0zg6Ez7rDYEAhVqo2gY9sk XtN80X/unsjGSbkCDQRf7uWiARAA3i7pu8/QvukeIBoIk1V0GHGPjX+GeV3fR4fu ciYgx+NKTXT/oJ/89KVeetT4CSnGEZcEpAvsBL3hsiblJYyLVmeoCniFlU+rMem4 zYP2PnEX70Q56d6SjBArs3K1FZK25S5qqv5ceM10NVRwPufV1RIuui6mQLm2ZwlY JyyANZZXMrHMJdaHpK9mMBSSF42MFQZhcauQCrhMhcpmZKn0D2+PpRveYwSr43Qi qBWR2INTDmj/V3ERMviE7vLajWQcmDdcrBp4u3miAJcJSn3XR5SiuL5W77jFEzgJ zR8yTC4hWE60nWJOk8UrEbpLyr7mBE0Tr7+1IBMgVXh8WHyzLE2ENREFvtp8KlSS y47Ky9n+5aqPI4M7epMNwU/ZGQnC8o3yX0zZL1tKq0fTAw1Ly4NGE1gRbmzrQcCh qUHg/J4KFYBMg8eCAzuPp4CRk8wUzu4fRWrOraoz/7bvhH8ilgPu1teLLKzDdOdx QAaiz/nGy00ICNbYqifR5m73K/rDdjtIqgsMp9Az0mEpgVNq8SPzM5grqAnP/iww QxwFftiXq/pEP2d8rn65e8NikN42Q28PH1D/uBYnOuVdZUvjU9wwywmfyr+NZMaH X9sN8R3Kk990W9VxwdOTITpAjz0qMtpE7i/GwPEtpZPTIfl54+cVKvyUjBuTXkWn vXN+6MkAEQEAAYkEcgQYAQgAJhYhBH4ckayAMKWlnR76uXUPPIdyPkASBQJf7uWi AhsCBQkD60WAAkAJEHUPPIdyPkASwXQgBBkBCAAdFiEEBjEqvVaiYb6sKxATk1aQ aqvQi4MFAl/u5aIACgkQk1aQaqvQi4P2Mg/9FXfsIZAgPN/Dq95y1fHG8jsPXEoY VNY1codxxAaNqvBXZkfJbFwSYpLY3xIbyxHuGuOtC9NpIy9M1+PR7MsxtZAvSjP+ flP/12x+6nP2H3NWOICpsY1tNOnQe2SjKJxZXHFnDqDBgKpv3QfKUHmYEdExJe3p NQrjZAgmdbEHeoj+P2VV5vqRrJoqNV/pUbM9czfEHeMVMm/mwWNOi/paCh1y/PxZ Mkj2bqLMRFfML9O/7QOJRxu3wQwl6jJHj4o6CHks6t237FSB+qZhhQP+vR2CZl5w 
lQ4trw0wpNgbZRIMlU3tUfFQ+KdFsM7UqwzwrVgWFur5r7KrFzJN88EKSplrIY0q se6S5b58H7Tw1jtfjb/xF6jQz5aoZ9xemd8roLReRpKPq70o2eIP1HkjCtqmd5Xc RQaVEUvlv34WZQ5w2eA1bEBESjbrKhX+H0Un0msUS0JpnpegRNZqW3Bedeos0usy MsfqMYmZEcZb3hw51XnSb8B/WhkSmcoEuECRxeCu1tw0pn7o4GemAeqT5ng8LXeE RJhrUTlCIyRab8TIQZvmf6XjneT0stZLKCoZUXO+7FH7F7nPsew1dU+WFIauQX71 PkZp2JMT7W57HKPuEillF8v5+H1k9Jq/2k+ZdgmT1Gd27nALBOc7q8rr00Lf6BU3 K+XsfWo+p08CXKudfQ/+JFzzpyKeX5nVqiqbxqUakPy/Ot010/7457YVpvcLmcvT Yn4cR0dottl96lp5wT1jN7VXfZu/tsHEtTg1ofeExNuCL8DZVsSN836idRmObhLP dnYmThZcXBJ3RgSniQNwvuuGUtpH7OXb5vnAOe42+n3yucxhPI9Gzo5g6fTqWwb+ qwh39ydxtiv3v3jgFixJLj/HH3MsxTm6cNUTWNLzvX+HugBeuOfyDG9++fe3UmZe MczAF9N9tDFP+0b1diXywJWfSdVLBmMARYeh0Swjud60SQLTqaqXVfPSECGo9LVc wot2u4q67QhUC2OTKiTkF6QVE05iKoPEPkCTmMvSpbHF3ERZE3J6YsVg17Uc7LrZ 7DRRF+03mu4njS8LvIoeBuqsB96mNQNH/PwLSANWTtclCwj2C9W1HKy3zKjnu3kC PHLzwQFEO28TE5EsblnBdA8ozNIV887V7yw89MxPhpuXRn8BVAU1S9Dj7j3mNHLj rVAgZmr/nx3oDt8VfOZpK8u3u1voZdC+cnTBdcG2gzM8Ya+h8C60Y8dFzykr8hr4 b5gDeDI1OkQ2vOQHtnQPdscYKl0v1ntHq2wrFuCIol4WneKh3Jrvdb37cL971u4g dpw0jTO/ykCvLlipxjJ/NrnXFb6TriZRgWZqiIwY2lKEfZDXqc/iOa2L0yBr21a5 Ag0EX+7luwEQAM/CQdinTzIHaEJsCe42g6tt4dBC/UC4wD367rJcyJbEd+qaLJwS CQUbg/wrEdRT+aROHVKLwrvXxtgJs0x15vvFTurkn1BnNMh7p8woYwip7PKrNn2+ 96Yg7Aqc3a3gkDQeF8Q7uipOH/5feJh6l7Iu718pvnDUw4UFZt/RUrdqseFXVwr/ ffSalLx7gJhL3mYuU1qpJZxsonNwAS43eViagI0FHSqixB5kPgFcbBf3BIiisOCy a1L9a+zSt1y1aEFC7m+9YlGJA3C0/X8s+dK0VWOrJlP/WmKUp3Epxpu6srsBItcT YMuGA82/03YAJ+jpGMRb+X1Dq9vuOUxvDjG+G10Cgew2EjiAkXpVg/1NsCrQWRbs KtFf5PXGfKCO0i8hEzwmJLd5OlNIIiup450iX4eS77Tey69hGyweLIC4YDPDwFpp bkDdRG6nDvePbEHi5z1L41NaWNa0wEyh28OqrmD0FCcGukk24pBVemVEx0En4siQ la6/1QXQlG/wTi7Yi71V/4oz7iZ4lSPWs0ACFGD9W5InlRykiRXC1cV27f+qMw9u Y6UbgvN70cWflK5C7e2h/eAQfxj+seYFUjMnJTkXiZE85m63p1Yu2A1c9+jqJ0L3 Lfn5YIQdtWdY3Qc1RIQYPVRl5NcgXIPV7TwjvnjowuHjWX0IQbhv61lNABEBAAGJ BHIEGAEIACYWIQR+HJGsgDClpZ0e+rl1DzyHcj5AEgUCX+7luwIbAgUJA+tFgAJA CRB1DzyHcj5AEsF0IAQZAQgAHRYhBOJesM8c6ASdR/HZpjPhDkoYOo5GBQJf7uW7 
AAoJEDPhDkoYOo5GhpcQALowCpZ8UowMWlQFfZ2ySJalnZM6S2RxCFiss4W9pGuu 9PKuN2wdXW3HGkBGDAuQgLwanSfhGSt/urT3+DT40OlDMzanRwEK0qiSaSs/xBtK dNL7JmGbcWTXpNP3aHhfYhVOg7NJnsfZ8Ti3dfuv3ZrjcLvgdnZ/s6O9S3gU8DtH fpnOfE3hxjUEHEw9hs9Otc6foCqMDZDvfU3emYduD5AvTiXYdeD/mZBD4OmF99II XWNuQexAJ+xgOPdvXaYt0lBuXmfMcn/1hrU3RJqguwnPZ2cU5zo41/uSbdsFrTHK yEOLTn0XYYk07mZGdscljzmXbpsbAC4Jp8CDBhUfdzfi1n3AOyblk1nywfionLlz HDtfWQYCxp16N8S2MU7tA1w8rFNwVDVwmxIfgjLrjPAgvqSpCmLHTXNBfdLUYRAv SpY9TR+U4YOOuEx2Niwnprdjm1qilN+fmPR3tWvVChlD3kHmSpi1+9ix+xizlBjN eZ08Eq5rDBPsTpqJmoNS8pHE0EL3IVpcB1pZ5rd6UBSa7LoMLeWwWm7Ap5VZALfp jMNws4SA2q5OTRY2or/+m1+cfDWIP+2XQV4YaNFMbO7XKr3vnUOxY9gyADqfRJiv DljHiw5iLzbkaHs7dYJOPNMGMlRzZfkkxg6Patx44TQ2rO7LnyCgVdFZWDHNevgR Z8AP/152xfh3qsOnT+R32Rt8CcwXmKFxLylgpjegcUmbutow9zdlX26qZ67cJ/3p hNLZgAYKPrGecGA0BJ2UzsPEKKz8I/dAp96LpHo/24WqUamh1z2PRAgyJGC43zm0 rA/KAlcht8bbI/VuZ5eAYXjH01QfPS7i7fFOryYYFqfH+BTp3ZEr/A7FkcOZXmNV Gg4+oC2t6cJnzDsM0MUJ7dgNAHTLGx6RZZahdE3LJ8oVJ8Vek9KtjJbPr143EZLt ymkiy93pzLUaKWfCZJCCI9nfJnNZnvoQXv0l3wnrQIFE14Fv0jbTALHRgRJlB4cZ i3teEuf7shSDsd13JDdfmxMsxnfeVsIUPa+J0GBSbe14JHXlcd0t03cpbzO547Qb rFpD98XO6Y7OefWD3pwDF2Izjnn4Cny/hpUIEO1A2j4qHhUkqmnFmBO6yIFic637 CJnYe3uU7ss/TNIUKLhujqlcNl8WeOMVPbhnCuOhyQh2aioAKn1yiQ1EgNSIGIVD LwqMt0kxI52/aDkZgCcEfBFC1c17IeUH+G0HMGm49/acFHkhX61S4efXhvzH5J0l Dr+0qk4aVKNwqkUNp56GSMLhiiSYivX9Xa4qQGNlmrki1pC2DamlTXDLB67XQcRp dAc+4nNTK4E/czrr0+wlkgz7pC1MAllCLilyTSPGnKIPlOd2uQINBF/u5d0BEADF +6hDuKvzbmKWZNXjJK6Em/5nnzBOa155YQLN91zMs6COI4p+YuIVPPzVWZYR0yHs gTWw45cMV+RYwuL/P+1Z84bgOyPloIVF9VQjOC+wB3Gn4qmTzobr6q+UfQVvUiUQ 8fGG11teWvYpWiG91uialjHZmrpAOQxjHRxHPpi0cZtTFEqinCIy6c942xbtZnzf nzPpxkKl0a8s1eKZ0KlDK6Ab59nxAinilohXRg/U6sqypsyLl41L0qMZek5dEt4C r3spdSkZgxqJpLTqQy/5VB4pcfEaIaank3sLxhpil/oQiq+38WA0VkICQyeiCsvf eEKyt1C6COBNH+olegUxudTKDHFthyGMPRz3McI5jHxCyru0mfLJag2hHXzgGoaD VkYIwkvyVsHWDqrZMMXcCIUVlpphxtHo1M32AATnWFe4K1nFdbejR9XC5xWOgwbT zCblqporHzU0c8WBbfJ0Y10IDrHsa/F08PkFvVN48Ydik6rcwowSPxP+59Q9AKLh 
Isd2hzfWU2zAbG5Ph1wecwlYR3tp/0i3uSTDXfuuaY+vrqpoECN6fnSg8NxiBbjU JR0Ju6KDM2SeBUz5hp9BzL8+OPTogRZoinxBogrRAvdGLOnLG5hMjBezzF8UEvp6 IMisGHBZgXoX4Juvf78RE8JOwHa+HUejj5kYiQW6TwARAQABiQRyBBgBCAAmFiEE fhyRrIAwpaWdHvq5dQ88h3I+QBIFAl/u5d0CGwIFCQPrRYACQAkQdQ88h3I+QBLB dCAEGQEIAB0WIQT2AU9wN9W7TuO6I3E56nu98JFFWwUCX+7l3QAKCRA56nu98JFF W5whD/9Hu5cnJ0hnzqk3MQsdMXbTNLsv+KePV71kcMRat4hjw2Li/TUaC8xtA81d O/1obmsuoDAgv82KlQ7DLDXjFk2q45lJdgZxAkN3dEoYakdTIEi11FvwbhV+qxZK jTq3jFQho4i3GDLgrvBMG4B1TGMH0IPux9fmBGpxYKmp1GjhpgoMXp9bqzsV/mPZ TxPlmIpeJEO2jeCWKhHHw6rzwGjF68G3HiJ0TqvjdCtcNrwd3GTDsdEJtUl49aqF M7VfoqKjVdRO/YDL//+TJNOYz5EBGjIZxbhgZJ9Qz+geSBx9GJtDWdq193ofFi39 oleTFnEMj+OeIr1Bc2pc8Z3HJttFknicJDkeze3mM0CZAkhVkLFy6DvAQkXrgvfp AUYFACQW8E2XmRBiKd4huojWYz5QGSEIk2fYRVhse2HAUZ9gTODSX2L13nls+BEi sArsmSFA/RQslDXW+Jl+P0e37BzN51uk2Dg4ylJUBgcpTRUn4Q8c1DgHDhkEVnBI ny2H/MFuhImw9g5xqlBfCEKh5D8D0e4fX28MhSsBlOCeIKJoY85U3GNY0tlIwAt8 M7IIHe1n1qncPbAMmq0K48J1lfyTEbXpnSfArzEdbnosjBUaiQX5EwA656eZ6wb3 Vq02UDei6KPuOosl4Voy+Ffq5MCkanVMA97/0wV3CeCvQYGbsvsUD/9fLYc3yH7A 0xksK7PImztDR8MLsUPoiv/vnfZ+WJJ+YJ0TKAHm1ZO3NqeZmD7XoWHKwh83zsK8 x/JUASCBN16isC+Ym6IwF83/HXJfKNvvotkr2WG6Dv8Vg1Hhk2Iv5y3EMbFa9rfv 6vjxho+0sYrraJH8qQAM08IIOi7+afrkR/ikgA8V7ymqmdxtMMHZqG+h5R0VGTVw QBxZ5/ZiY56Qn5UH2m0Tc2AHOcAQTvCEwyb19IPyhif+rek3npSvKtDc6WBJioyi gvDhl+jgIfcIo77w6GthgbFc9k68Je56Peu2J30zWj76Z+Di1OJhAj1wFr4/XT5o c1MB/Vfyx3hEPRDNz7dRaDqoVnYVdoI0blyCiSkD9I4/axb4X3xN2SK4XA/zv+Lb 1FbCM1XFL2aF+09tk+77EVdWsBmQpOArD0d54E1YulBGaxVm5QKfov23KiqHIFVF 8WYqJqNJwbJRZii7klczkVm3wFte3NWK7HW8kfF147lv0z3AiZYnk0O6Mj1ip3R8 Qm5yiv57DbbgIMkSPWCpEtFGHIoK2msJ2bQcizh2WGxLos00RTx3IVAeSAS54+kr rMBg50wNczcGHKPDUKLwkYczgHonUtljAkeXnTl69rifChI+KpjHNtF6dFgC1aSt MOud6HhAcd0f3lmuPzCGGp4YOQx9tV139bkCDQRf7uX4ARAAxaybudQK4fMIzLiV grIzthhb3/DK83PNohTNMemM2V2z1Ij5Dlu2XNDypMdR0rKM/QI3zWud1+vd2h/l QZlg58FspvrY6I7hI+cbdRldVaAKDGQHo5Bi0a7BkonZvS/0wnNUPIhy/znzXtXR f4L7ePZMofH/2shz4TZ1yNpU8zaomY6eNjSc51P4vVxtDQ4QofQeJEn8aO9a4whu 
O0TVEAPKRYBRgjM8faDuUJtLfiC3OrhLg+B7JVSF3di4JITAyafPbZACLjV7Umxb SUL3qTJZVpIuhF0xQOCE+WRx3Xs7lkPdHMqP2OaJ8Y4ymR08cSfIP2XFKsQFtoqT VyMQgGgI6VXF8OfnCnGgx0Do1vJNoL0neFzVXpCPPzh1RbcrtndZWum/1R4egkYg J8TPQH5X391J58Uwd5l9/ZDdoSeeQYdtTR4YQ8//ATFO3hoSRvES4U6ZwO8LM6di ra6pqb6j0liT+DdcBwE4C1bGJMJ6d93S5SfH3llDIMJo7uJDbKILFMES9rg7S6I8 +SW75TjKUk4Y7L8R8qwURqEyuOOGfaQXirqvji4PdcGDBiIk2Oq69Ky6lmlJgyIH SZ7SO1JXk0yAJTXb+a6FJTLFxidkIZzu+LhLBn/MhAPjVyv3qCTQ7O0lu8Mfcqg5 8hhJ6IE79PBHS3z8ok+mFK0iGrcAEQEAAYkEcgQYAQgAJhYhBH4ckayAMKWlnR76 uXUPPIdyPkASBQJf7uX4AhsCBQkD60WAAkAJEHUPPIdyPkASwXQgBBkBCAAdFiEE JFV3TUL9/mucOD64/hACvFlwgR8FAl/u5fgACgkQ/hACvFlwgR+LoRAAgtIgaKb4 ZY8qoAFZeph+Syg+mMKfPJkBuGUedJl6IxbHBSg2mhnCjJ0bmdqxsAXgtcSUqmtZ Yw9NyoGgiVjs+gu5sQp1Oxc2/keQXaVksTkoXwdnf+2iXyp1WPeeLGySHmzuwy9c eExt+h0mVmBgFls2wNdFGPbVfiT3PvFkwqsnta6HebDTN4pMzvG1IIGV7L5KRo1E dmkrt3lXQWmdgHl3JoNQ9v/Jgf4jo6gDw53YvJFKJcaOOAS3d4CzPWmcLzcy4mf0 9YI3DoQCbYL3cRNelUwzUF2L6QyPCwonXemLCmfkBgsSVqvW4fq8qbEHGF2fK7x3 d7bZEsUiGCt/tXOkDkNJ31T/mC35nxZfcj8AMPixO+BnAeKeYC37LbQD76jrw526 tUXsAF+QON5DPeot+e8bIx9qSbvdqpXDkK4lGcRTuS2OVC8J9XfDTch4wm3Kd4P4 lDdRAJWnLfVay0m05LGlekWdEzcjP8KDaICH9rEs6f9e1gy6mTEBnBW//41BxELT KxoTGlcX3yEhCmK36g5C/+d6b7Ji5arGGTCa96v/xG32KYc1zfn3TYkCx06pPUbz iAl2l0MTpGeqz2hJMOGA3JuxwlksJKqnPYy0hHKdVW4Pnn25NeXcBp8wpkt8VZOR bzjw/TJB7qvJHoRo1tat85Uij9rAXqTyO8Ea0hAAi/EfuiDDy3GV7bvjFSA1XEjL d+F40g2X0QG/PHTScYB4rFJwV0GFUxLHr4g7iypAVI+BB4EYikx8gpee6B0g3J+r aCFDDrRPDKdqrpZK53oYcBPkdSBbCr5MAa/M3DerKBEgoBVUbaSHWN7OH2ae+5R6 X2ERmYZdW4PCj6lw7a+RhkAsgKo8RjonjV61ehQPZh20noI19Q80BYYSCfHHvzy5 vwvByhmTMJNrl3PDpBy9/TwBR5DpnHfOPJX6bnl3pdu65F2TRM6yoFbfoUiEqrXV 4wC1I++N9VjrQvXSp0ik/XaMWq87wLIg+1owElJIzwyZWukQkZMAYtesVFz20YwC 7Nu8SNr/NTSCH1EqLsS4YhBTsjpc2T8AqUlgxKrilmLbrj64PXgMsQ9WYm5zwlC5 UA5eky5YhETFJ25dIaplMm47aIbPSH5f9y5eYPkfOCoMu5oDzDzoXdH9V1YfsHqa 8bboSgTdariC23x38E9PaWQNyY2MFKL6cFt2ilIsMSSD6JAm1x8kBtn1bBopG588 7mTDtlqHCw/QrTuLreJG9KJ1dQFJ/Q42+csH09l081wlv4BBuVlN1Xmj+c2sWn90 
l1BPZfYHd9jhggI96yTZhfTfFbSMSuGPQyqHnwDYdA3cNj5BYievBkO5FZaCe9SZ 4xcYgqlVpv15O7VrD+I= =Uugw -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/202122pgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":["ISC","Company News"],"contents":" Each quarter we like to profile one of our engineers, to acquaint our customers and users with some of the people they may interact with as they use our software. This time we\u0026rsquo;d like to introduce you to our newest staff member, Nicki Křı́žek!\nNicki joined our BIND 9 quality assurance team on May 1, so they\u0026rsquo;re still trying to get their head around everything at ISC. But they\u0026rsquo;ve been happy to see that some of the open source tools they\u0026rsquo;ve contributed to in the past are being utilized for BIND 9 testing.\nNicki comes to us from CZ.NIC, where they contributed to many DNS-related projects such as Knot Resolver, DNS Shotgun, and respdiff. Before that, they worked for Red Hat, which they say was \u0026ldquo;a great place to learn how the open source model can work commercially and to see how open source communities operate.\u0026rdquo; Nicki is a big proponent of free and open source software, and \u0026ldquo;by contributing to it professionally, I can really put my money where my mouth is.\u0026rdquo;\nThey have done a lot of DNS benchmarking that focused on the newer DNS-over-TLS and DNS-over-HTTPS protocols. They explain, \u0026ldquo;The switch from the stateless UDP to the stateful TCP has a lot of performance implications and I\u0026rsquo;ve spent quite a bit of time trying to understand them. It eventually led to the development of DNS Shotgun, which can be used for some really advanced benchmarks that excel at simulating real-world DoT and DoH traffic.\u0026rdquo;\nNicki is pleased to join ISC and looks forward to further improving the tools they\u0026rsquo;ve worked on in the past to better suit ISC\u0026rsquo;s use cases. 
They hope to come up with new tools that could improve the overall quality of BIND 9 and DNS software in general. They add, \u0026ldquo;I see contributing to free and open source software as a way to give something back to the community, since I rely on the work of many other FOSS contributors in the software I use every day.\u0026rdquo;\nThey recognize that the most challenging part of company-backed open source development is the business model, and they acknowledge that convincing people to pay for software that they can just run for free is difficult. As they say, \u0026ldquo;Providing the added value through support and documentation is critical to convince customers to actually pay for continued development. ISC seems to have this model just right.\u0026rdquo;\nNicki holds a master\u0026rsquo;s degree in Information Technology from the Technical University of Liberec (Czechia). And they use Arch btw.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. As always, we welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-krizek/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Nicki Křížek!"},{"categories":["BIND"],"contents":"We recently dealt with an interesting case of a fleet of BIND 9.16.27 instances running on FreeBSD 12.3. It was reported to us that over a period of a few days, named processes were exhausting all available memory on the hosts, causing them to start swapping memory to disk. The interesting part was that the servers in question were neither performing recursion (so the ever-increasing memory footprint could not be attributed to cache bloat) nor subject to a particularly significant query load. 
This blog post describes the steps we took to determine the root cause of this issue and what we found out during that journey.\nOff We Go We started the investigation by checking whether the increase in memory use was tracked by BIND 9\u0026rsquo;s internal memory contexts. These have the capability to record every allocation made by our own code (down to file:line tuples). We could use that capability to periodically collect dumps of allocation data and then look for differences between them, which would probably allow us to at least determine the part of the code to take a closer look at. Unfortunately, in this case the graph of total memory use tracked by libisc memory contexts turned out to plateau after an initial build-up. This meant that the problem lay either in the code of one of the external libraries that the BIND 9 code uses or in the way BIND 9 interacts with these libraries.\nSince we did not get similar reports for Linux environments, our first suspicion was that the root cause was a BIND 9 bug caused by a missing destructor for some synchronization primitive. One important distinction between libthr (FreeBSD\u0026rsquo;s POSIX threads library) and other implementations such as glibc\u0026rsquo;s pthreads is that libthr allocates its primitives (mutexes, condition variables, etc.) on the heap. This means that failing to invoke a destructor for some primitive (e.g. pthread_mutex_destroy()) at the end of its intended lifetime causes memory allocated for that primitive to leak. This may ultimately lead to an out-of-memory condition, particularly if the problem exists on a hot path. This was exactly what was happening in the case of a bug we fixed recently. 
However, that particular problem was already addressed in the version of BIND 9 running on the affected servers, so we had to look for other memory leaks of this type in the environment in question to ensure we had not missed anything.\nQuick experiments with Valgrind did not uncover anything suspicious. Since using Valgrind incurs a significant performance penalty, we also tried finding memory leaks using a work-in-progress method of using jemalloc\u0026rsquo;s leak-checking capabilities on named. The idea was that memory could be leaking due to some specific ordering of events, which never happened when named was running under Valgrind due to the latter\u0026rsquo;s significant overhead. This was slightly tricky to implement, for two reasons:\nWhile jemalloc is the system allocator on FreeBSD, the default system build does not include support for profiling (--enable-prof).\nAutoconf-based build-time detection of jemalloc is available only in BIND 9.18+.\nTo work around these obstacles, we built jemalloc from source with profiling enabled and used LD_PRELOAD to make the runtime linker use our custom profiling-enabled jemalloc build for memory management, instead of the one baked into FreeBSD\u0026rsquo;s libc. This allowed us to generate call graphs of the functions responsible for creating all outstanding (not free()d) memory allocations at named exit time. 
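For readers who want to replicate this setup, the steps described above look roughly like the following sketch (paths and the heap-dump file name are illustrative assumptions; jemalloc's prof, prof_leak, and prof_final options only take effect in a build configured with --enable-prof):

```shell
# Build a profiling-enabled jemalloc from source.
./configure --enable-prof && make

# Run named with the custom allocator preloaded; dump outstanding
# allocations when the process exits.
LD_PRELOAD=/path/to/libjemalloc.so.2 \
MALLOC_CONF="prof:true,prof_leak:true,prof_final:true" \
    /usr/local/sbin/named -g -c /var/named/named.conf

# Turn the final heap dump into a call graph of not-yet-freed allocations.
jeprof --show_bytes /usr/local/sbin/named jeprof.*.f.heap
```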
Unfortunately, analyzing these graphs did not uncover a promising path forward, either: all reported memory allocations stemmed from early named initialization and therefore could not contribute to a memory leak which was growing over time.\nHoming In While conducting the seemingly fruitless experiments with a profiling-enabled jemalloc build, we noticed one peculiarity: a ps auxw loop launched in the background was occasionally reporting a weird sequence of memory-use statistics:\nUSER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\nThu Mar 31 14:29:53 UTC 2022\nnamed 76113 0.0 1.6 3722832 1074476 0 I+ 13:31 0:16.89 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:29:58 UTC 2022\nnamed 76113 0.0 1.6 3722832 1074476 0 I+ 13:31 0:16.89 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:30:03 UTC 2022\nnamed 76113 28.2 6.4 19557584 4273736 0 I+ 13:31 0:20.24 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:30:08 UTC 2022\nnamed 76113 23.2 1.6 3844268 1100952 0 I+ 13:31 0:21.65 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:30:13 UTC 2022\nnamed 76113 12.5 1.6 3844268 1101004 0 I+ 13:31 0:21.65 /usr/local/sbin/named -g -c /var/named/named.conf\nAround 14:30:03, over a period of just a few seconds, VSZ jumped from about 3.5 GB to about 18.5 GB, RSS jumped from about 1 GB to about 4 GB, and then both values dropped down to almost their previous values; yet memory use in the \u0026ldquo;new stable state\u0026rdquo; was visibly higher than before that event.\nLooking closer at the ps output history, we quickly noticed that such events were happening at 15-minute intervals, which we tracked down to a cron job collecting named statistics from http://localhost:8053/json. We ran the same ps output collection process without LD_PRELOAD affecting named (i.e.
using the stock OS allocator) and the periodic jumps in memory use were still happening:\nUSER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\nThu Mar 31 14:44:57 UTC 2022\nnamed 5831 0.0 0.6 616092 417880 0 I+ 14:36 0:00.72 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:44:59 UTC 2022\nnamed 5831 0.0 0.6 616092 417880 0 I+ 14:36 0:00.72 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:45:01 UTC 2022\nnamed 5831 2.2 0.8 711020 503780 0 I+ 14:36 0:00.96 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:45:03 UTC 2022\nnamed 5831 1.5 0.8 711020 503780 0 I+ 14:36 0:00.96 /usr/local/sbin/named -g -c /var/named/named.conf\nThu Mar 31 14:45:05 UTC 2022\nnamed 5831 1.1 0.8 711020 503780 0 I+ 14:36 0:00.96 /usr/local/sbin/named -g -c /var/named/named.conf\nSo what transpired during the statistics collection process that was causing such shifts in memory use? Flame graphs to the rescue!\nWe used a slightly tweaked dtrace invocation suggested by Brendan Gregg for collecting on-CPU stack traces for a process of choice:\ndtrace -x ustackframes=100 -n 'profile-1000 /execname == \u0026quot;named\u0026quot; \u0026amp;\u0026amp; arg1/ { @[ustack()] = count(); } tick-2s { exit(0); }' -o out.stacks\nWe ran the above command while a curl fetch collecting JSON statistics was running. The resulting out.stacks file was post-processed using stackcollapse.pl and passed to flamegraph.pl, which produced the following image:\nThis immediately pointed at isc_taskmgr_renderjson() as the primary suspect. That function was present in 40% of all named stack traces collected during the profiling period.\nAnother thing that drew our attention was the huge presence of functions related to memory management in the flame graph. Freeing a JSON object taking 15% of all CPU time assigned to named? Something had to be wrong there.
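The post-processing step mentioned above uses the stackcollapse.pl and flamegraph.pl scripts from Brendan Gregg's FlameGraph repository; a sketch of the invocation (script locations are illustrative):

```shell
# Fold the raw dtrace stacks into one line per unique stack, then
# render an interactive SVG flame graph from the folded output.
./stackcollapse.pl out.stacks > out.folded
./flamegraph.pl out.folded > flamegraph.svg
```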
Before moving forward, we decided to simplify the graph a bit by removing shared object names from the labels, cutting off the stack traces at all libc functions related to memory management, and marking such cuts with contrasting colors:\nThis emphasized how omnipresent memory management functions were in the profile: in total, about 30% of all CPU time assigned to named during the profiling period was spent in either malloc(), calloc(), or free(). That is a lot of time to spend purely on memory management.\n-ETOOMUCHSTATISTICS We resumed our investigations by going back to isc_taskmgr_renderjson(), the function responsible for producing task manager statistics. A quick glance at the JSON object returned by named on a host with 32 logical CPUs confirmed that the task manager statistics were quite huge - and indeed by far the largest component of the statistical data:\n$ curl -s http://localhost:8053/json | jq -c | wc -c\n4106028\n$ curl -s http://localhost:8053/json | jq -c \u0026#34;.taskmgr\u0026#34; | wc -c\n3599300\nTask manager statistics are nothing else than a list of task objects, each of which has just a few integer/string fields attached to it. So\u0026hellip; what is using so much space? Exactly how many tasks are there in a named instance running on a 32-CPU machine?\n$ curl -s http://localhost:8053/json | jq \u0026#34;.taskmgr.tasks | length\u0026#34;\n36897\nWhoa, that\u0026rsquo;s a lot of tasks.
Since each of them has a name which describes its purpose and therefore also identifies the piece of code which created it, we took a look at which types of tasks were the most prevalent ones:\n$ curl -s http://localhost:8053/json | jq \u0026#34;.taskmgr.tasks[].name\u0026#34; | sed -E \u0026#34;s|res[0-9]+|res\u0026lt;number\u0026gt;|\u0026#34; | sort | uniq -c | sort -n\n1 \u0026#34;ntatable\u0026#34;\n1 \u0026#34;server\u0026#34;\n1 \u0026#34;zmgr\u0026#34;\n2 \u0026#34;ADB\u0026#34;\n2 \u0026#34;cache_dbtask\u0026#34;\n2 \u0026#34;resolver_task\u0026#34;\n2 \u0026#34;statchannel\u0026#34;\n2 \u0026#34;view\u0026#34;\n10 \u0026#34;loadzone\u0026#34;\n10 \u0026#34;zone\u0026#34;\n2048 null\n2048 \u0026#34;res\u0026lt;number\u0026gt;\u0026#34;\n32768 \u0026#34;udpdispatch\u0026#34;\nThe high count of udpdispatch tasks (which scales linearly with the number of CPUs available on the host) obviously stands out. It is a known issue in BIND 9.16 and older versions; in BIND 9.18, the dispatch code was reworked from the ground up. One of the results of that huge refactoring effort was dropping dedicated udpdispatch tasks altogether, as their high counts were not translating to improved performance in any way.\nLet\u0026rsquo;s estimate the amount of memory-related work required to produce JSON statistics on a 32-CPU host: each of the ~37,000 task objects included in JSON statistics has three string fields and three integer fields. Each of these fields is a JSON object, so a quick back-of-the-envelope calculation suggests that there are at least 200,000 memory allocations happening for every JSON statistics fetch on a 32-CPU machine. That explained why a profiling-enabled jemalloc build needed tons of space to store the stack trace for every single allocation. But that\u0026rsquo;s not all - after being allocated, the primitive objects then have to be put together into one large object, which then in turn needs to be serialized.
And finally, all of those objects have to be freed, which also does not happen instantly.\nThe simplest workaround for this problem was not to use the \u0026ldquo;catch-all\u0026rdquo; /json URL, instead relying on other URLs allowing more granular access to JSON statistics (to avoid generating task manager statistics altogether). We could end the story here, but \u0026ldquo;just do not use this feature\u0026rdquo; is not the best response to give to a user reporting a problem. Also, if the JSON objects are properly destroyed after they are sent to the client, how could this be the root cause of a long-term problem?\nPop the Hood! While some of the techniques we used to investigate the problem up to this point (e.g. building a library from source) may not be possible to employ in every production environment, all of them could at least be applied to a live named instance serving real traffic. To get to the bottom of what was happening, we needed to be able to reproduce the problem in an environment allowing free experimentation (using debuggers, experimental code changes, etc.).\nInterestingly enough, we quickly managed to reproduce the problem in a FreeBSD 12.3 QEMU VM with 32 virtual CPU cores, but not on Linux (where BIND 9.16 binaries were forcibly linked with jemalloc using a configure.ac hack), not even when we forced use of the same build-time options on Linux as for the default FreeBSD system build. The behavior we observed on Linux was that as named processed numerous requests for JSON statistics, its memory use increased over time, but not as quickly as on FreeBSD and, most importantly, not indefinitely.\nTo examine more detailed data than just ps output, we added malloc_stats_print() calls to named and tried to determine which counters reflected the ps-reported VSZ/RSS values. It quickly turned out that the number of dirty pages for a specific jemalloc arena was correlated with the values reported by ps. 
Over time, the value of that counter was increasing in a seemingly unbounded manner on FreeBSD, but not on Linux.\nEven without knowing what was happening behind the scenes, this discovery was substantial because it led us to the first practically feasible solution for the problem. Our line of thinking was: if the infinite memory use growth is caused by unbounded amassing of dirty pages, can we somehow prevent it from happening? Browsing the jemalloc documentation, we came across the dirty_decay_ms option, which looked promising:\nApproximate time in milliseconds from the creation of a set of unused dirty pages until an equivalent set of unused dirty pages is purged (\u0026hellip;) and/or reused. Dirty pages are defined as previously having been potentially written to by the application, and therefore consuming physical memory, yet having no current use. (\u0026hellip;) A decay time of 0 causes all unused dirty pages to be purged immediately upon creation. (emphasis added)\n\u0026ldquo;Job Done, off to the Pub\u0026rdquo;? We confirmed that starting named with the MALLOC_CONF environment variable set to dirty_decay_ms:0 causes memory use to remain stable. This solution could be successfully deployed with the stock FreeBSD system allocator and precompiled BIND 9 binaries - no patching was required anywhere. It looked elegant, but\u0026hellip; if it was really as gorgeous as our complacent selves thought it was, why would dirty_decay_ms not be set to 0 by default by jemalloc itself?\nThe catch is that not retaining any dirty pages (in other words, immediately returning all dirty pages to the operating system upon their creation) hinders performance because it prevents dirty pages from being recycled for subsequent allocations, increasing the allocator\u0026rsquo;s system call footprint. So while this fix was acceptable in our specific use case, it was not necessarily feasible in the general case (e.g. under more significant load). 
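For reference, the workaround amounts to a one-line change to named's startup environment (the paths below are illustrative; see the jemalloc manual for the exact semantics of dirty_decay_ms):

```shell
# Tell jemalloc to purge unused dirty pages immediately instead of
# retaining them for reuse. The trade-off is a larger system-call
# footprint under heavier allocation load.
MALLOC_CONF="dirty_decay_ms:0" /usr/local/sbin/named -c /var/named/named.conf
```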
That conclusion posed a different question: why were the dirty pages in named not purged over time, as promised by jemalloc\u0026rsquo;s documentation? A bug like this just had to have been triggered somewhere, somehow, by someone before, right?\nKeep It Simple, Stupid Since named is a non-trivial piece of software, we wanted to strip the problem down to the bare basics, hoping to be able to better understand jemalloc\u0026rsquo;s behavior in a simpler application. To that end, we wrote a C program which naïvely mimics what named does in isc_taskmgr_renderjson(): each time a signal is received, one of the threads spawned by the program creates a JSON object comprised of tens of thousands of other JSON objects, serializes that object, and then frees it; jemalloc statistics are output after every iteration.\nUnfortunately, this program was not able to trigger infinite memory use growth on any operating system we tried it on. This meant that there was some difference between the way jemalloc behaved for this program and for named. Following this lead, we once again resorted to comparing malloc_stats_print() outputs for our sample application and for named. Both on Linux and FreeBSD, one difference became striking after a few dozen iterations. This is a sample excerpt from malloc_stats_print() statistics for a single jemalloc arena in named running on Linux:\ndecaying: time npages sweeps madvises purged\ndirty: 10000 17625 0 0 0\nmuzzy: 10000 0 0 0 0\nand this is a corresponding sample from our attempt at a reproducer:\ndecaying: time npages sweeps madvises purged\ndirty: 10000 10715 17 53 19299\nmuzzy: 10000 27 7 17 9542\nAs you can see, in our sample program, dirty pages were clearly being purged over time, as expected.
But in named, even on Linux, where the dirty page count was not growing indefinitely, it seemed that jemalloc was not returning any dirty pages to the operating system.\nrr, or There and Back Again To learn what jemalloc code paths are taken when dirty pages are purged, we captured an rr recording of our sample program and used a data watchpoint set on the purged counter, combined with the power of reverse execution, to understand the logic behind the process. That allowed us to determine that in jemalloc 5.1.0 (which is the version used in FreeBSD 12.3), in order for arena decay (i.e. dirty page purging) to be triggered, a so-called \u0026ldquo;ticker\u0026rdquo; has to fire. A ticker is a simple construct that jemalloc uses to ensure that certain actions are taken \u0026ldquo;once every N times\u0026rdquo; (e.g. \u0026ldquo;try purging some dirty pages once every 1000 (de)allocations\u0026rdquo;). Apparently these tickers were not firing during JSON statistics generation in named. Why? rr to the rescue again!\nTo our astonishment, while we were capturing an rr recording of named on Linux, the counter of dirty pages started significantly growing with each curl request, just like it originally did on FreeBSD! Hooray? We had no idea why that happened, but we first focused on figuring out why the tickers were not firing - fortunately, the number of purged dirty pages remained at 0 even under rr. After all, with a captured rr recording of what looked like a successful reproduction of the original issue, we could just get back to exploring that angle later on (which we did).\nGoing back and forth through both rr recordings, we ultimately reached a surprising conclusion: some ticker structures were never initialized by jemalloc in named. Further analysis of the code paths involved led us to an explanation of that phenomenon: the threads that did not have their ticker structures initialized were the ones which called free() before issuing any *lloc() call. 
This can be the case with e.g. named\u0026rsquo;s worker threads: they loop over event queues, executing callbacks as events are posted to them, and it is common for an event callback to free the event structure associated with it. That happens e.g. during named startup, when the so-called task-exclusive mode (which, by the way, is also being removed from BIND 9) is entered early in the process in order to load DNS zones; doing that involves pausing most worker threads, which the network manager achieves by posting \u0026ldquo;pause\u0026rdquo; events to worker threads\u0026rsquo; event queues. Each worker thread then processes the \u0026ldquo;pause\u0026rdquo; event and frees it afterward.\nDrumroll\u0026hellip; In short, we learned that if a thread starts with a free(), it becomes incapable of ever triggering dirty-page purging during its lifetime. Note that purging (returning dirty pages to the operating system) is not the same thing as recycling (reusing dirty pages for subsequent allocations before returning them to the operating system). On Linux, dirty-page recycling was efficient enough to keep memory use in check in the long term. On FreeBSD, it was not (more on that below). 
However, jemalloc was affected by the broken dirty-page purging issue on both of these systems.\nTo confirm our findings, we tweaked our sample program slightly, so that each of its threads started with a free():\ndiff --git a/reproducer.c b/reproducer.c\nindex 494d05e..666f9a1 100644\n--- a/reproducer.c\n+++ b/reproducer.c\n@@ -14,8 +14,12 @@\n #define THREADS_TO_CREATE 4\n \n static bool done;\n+void *dummy;\n \n void generate_json(void) {\n+\tfree(dummy);\n+\tdummy = malloc(sizeof(dummy));\n+\n \tjson_object *root = json_object_new_object();\n \tassert(root);\n@@ -108,6 +112,8 @@ int main(void) {\n \tfprintf(stderr, \u0026#34;PID: %d\\n\u0026#34;, getpid());\n \n+\tdummy = malloc(sizeof(dummy));\n+\n \tsetup_threads(threads);\n \tsetup_signals();\nWith that change in place, lo and behold, the malloc_stats_print() statistics reported by our sample program started looking like this:\ndecaying: time npages sweeps madvises purged\ndirty: 10000 11253 0 0 0\nmuzzy: 10000 0 0 0 0\nThis issue was reported to jemalloc developers, but not before we found out that it was inadvertently fixed in jemalloc\u0026rsquo;s dev branch by moving the ticker state to a different structure. However, that fix is not yet present in any numbered jemalloc release (the latest one being 5.2.1 as of this writing), though it will be included in the upcoming 5.3 version.\nKnowing that an upstream fix is in the pipeline is good, but it is not much consolation to all the existing deployments out there in the wild that are affected by this issue. To help with that, we prepared a workaround which forces each worker thread to perform a dummy memory allocation at the beginning of its lifetime. That workaround will be included in BIND 9.16.29.\nNote that while this problem was discovered in BIND 9.16 running on FreeBSD, analyzing its nature uncovered that it also affects BIND 9.18+ binaries linked with jemalloc on other platforms.
We have not yet received any reports about similar issues with BIND 9.18 - and given that this problem will also be worked around in BIND 9.18.3+, hopefully we never will!\nTying Up Loose Ends The one burning question that was left unanswered was this: how come we managed to trigger the issue on Linux under rr, but not without it? One of the limitations of rr is that it emulates a single-core machine, which caused jemalloc to allocate just a single arena for all named worker threads. Knowing this, we were able to reproduce FreeBSD-like jumps in memory use after each curl request on Linux without rr by setting the MALLOC_CONF environment variable to narenas:1, emulating what happens when named is run under rr.\nBut\u0026hellip; if this change is necessary to trigger the problem, then why was it originally noticed on FreeBSD, where named uses multiple jemalloc arenas? It\u0026rsquo;s\u0026hellip; complicated. The answer lies in the nitty-gritty, OS-specific details of how jemalloc recycles dirty pages when realloc() calls are issued for large extents. (This is what the json-c library does when it converts an in-memory representation of a JSON object to a string. Specifically, it resizes the target buffer to twice its current size each time it runs out of space. Given that serialized JSON statistics are several megabytes large for hosts with numerous logical CPUs, the serialization process causes realloc() to be called with several-megabytes-large target sizes.) One example difference between Linux and FreeBSD is that on the former, jemalloc prefers to retain dirty pages when it would normally give them back to the kernel on other operating systems. 
But there are more subtleties involved in the process that could easily take a separate article to describe; suffice it to say that we did manage (albeit very rarely) to observe several-megabytes-large (FreeBSD-like) jumps in memory use after a single curl request on Linux - they just do not happen as often there as on FreeBSD.\nWhere We\u0026rsquo;re Going, We Won\u0026rsquo;t Need 32768 udpdispatch Tasks While investigating this issue, we employed a variety of techniques for debugging memory-use issues in BIND 9. We also learned quite a bit about how jemalloc works and how to interpret its arena statistics, which will certainly pay off the next time we have to investigate a similar issue. We also found out that one of the factors contributing to the problem (a huge number of tasks created on certain types of hosts) is already gone from newer BIND 9 versions, which reassures us that our continuous refactoring efforts are worth their non-trivial cost in the long term. Back to work, then!\n","permalink":"/blogs/jemalloc-glitch/","tags":["BIND","performance"],"title":"How an OOM Issue With BIND 9 Led to Discovering a Memory Allocator Glitch"},{"categories":["Kea","Open Source","Events","Performance"],"contents":"Maximize Performance or Flexibility in Your Kea DHCP Installation Kea can be deployed in multiple configurations, to optimize either performance or flexibility. In this webinar Tomek Mrugalski (ISC\u0026rsquo;s Director of DHCP Engineering) will discuss which design choices give maximum performance and explain how to tune your existing configuration for better performance. He will also share the latest Kea performance test data.\nRegister for the session, which will be recorded and posted on our website and our YouTube and BrightTALK channels. 
The session will last approximately one hour, including Q\u0026amp;A as time permits.\nPlease check your invite from Zoom for the correct time in your timezone.\nSpeaker Information: Tomek Mrugalski holds an MSc (2003) and PhD (2010), both related to DHCPv6. He spent seven years at Intel before joining ISC in 2011, where he leads the DHCP engineering team. Tomek was the original designer and developer of the Kea DHCP server, and leads development of ISC DHCP and Stork as well as Kea. Tomek has been active in the IETF since 2009 and was the DHC working group co-chair at IETF until 2020. He has 13 RFCs published, including the specification for DHCPv6bis (RFC8415), for which he was primary author.\n","permalink":"/blogs/kea-performance-tuning/","tags":["Kea","Webinars","Events","Training"],"title":"Kea Performance Tuning Webinar"},{"categories":["Open Source","ISC","Annual Reports"],"contents":"It\u0026rsquo;s time once again to reflect back on our accomplishments of the past year. And there were many! We are pleased to share our 2021 annual report with you.\nWe invite feedback at marketing@isc.org, and we thank our support customers and contributors for their ongoing support of our mission.\n","permalink":"/blogs/iscs-2021-annual-report/","tags":["2021","ISC","BIND","Kea","DHCP","ISC DHCP","Stork"],"title":"ISC’s 2021 Annual Report"},{"categories":["ISC","Company News"],"contents":" Most of our recent engineer profiles have focused on new ISC staff members, but now it\u0026rsquo;s time to get up close and personal with someone who\u0026rsquo;s been here more than 11 years: Eddy Winstead!\nEddy is ISC\u0026rsquo;s Sales Engineer, the technical voice of our sales team. Prospective customers often speak to him before signing up for ISC\u0026rsquo;s support services, to better understand how our support team can help their organization succeed. 
Eddy is also in charge of training and consulting engagements for our customers.\nHe grew up in tobacco-farming country in the southeastern US, and was first introduced to UNIX as an intern with the MCNC North Carolina Research and Education Network (NCREN) in high school and college. After earning his engineering degree, Eddy was immersed in the intricacies of BIND 9 and ISC DHCP in his full-time job at NCREN, where he served as Hostmaster for about 15 years. He says that everything that\u0026rsquo;s been applicable to his work at ISC has been learned on the job - except for typing, \u0026ldquo;which turned out to be the most valuable class I took in high school!\u0026rdquo;\nEddy\u0026rsquo;s work puts him in constant contact with ISC\u0026rsquo;s customers, and he loves that interaction. He says, \u0026ldquo;Prior to Covid, I had the opportunity to travel the world and do training and consulting for ISC customers. The travel was great, but my favorite part was meeting customers. I\u0026rsquo;ve made some great friends and have been shown parts of the world I wouldn\u0026rsquo;t have seen without those connections. I\u0026rsquo;ve done some great hikes, had some great meals of local cuisine, and visited some off-the-beaten-path places with ISC customers.\u0026rdquo;\nISC was a natural fit for Eddy\u0026rsquo;s talents. As he says, \u0026ldquo;BIND was just what you used for DNS back in the \u0026rsquo;90s, so that\u0026rsquo;s where work with open source began for me.\u0026rdquo; He has a good mix of technical and business sense, as well as both best-practice engineering and operational knowledge, which make him invaluable to our customers.\nDuring the Covid era, he has had to do most customer sessions virtually. 
He says, \u0026ldquo;I\u0026rsquo;ve made some great connections this way, but I look forward to meeting more of these folks in person in the future.\u0026rdquo;\nWhen he\u0026rsquo;s not eating deep-fried pickles (his Southern birthright), you may find Eddy playing old-school video games or Nintendo with his kids, listening to electronic music, or hiking. And if you ever spend time with him on Zoom, you may see him playing fetch with one of his cats.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. We welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-winstead/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Eddy Winstead!"},{"categories":null,"contents":"Below is a listing of all the public mailing lists on lists.isc.org. Click on a list name to get more information about the list, or to subscribe, unsubscribe, or change the preferences on your subscription. 
List administrators, you can visit the list admin overview page to find the management interface for your list.\nThis page is manually edited; in case of errors, visit the ISC Mailman list overview.\nIf you are having trouble using any list, please contact mailman@lists.isc.org.\nMailing Lists Description bind-announce BIND 9 announcements mailing list bind-users BIND 9 users' mailing list; the searchable non-ISC mail archive is here dhcp-announce DHCP announcements mailing list dhcp-users ISC DHCP users' mailing list; the searchable non-ISC mail archive is here dhcp-workers DHCP workers' mailing list inn-announce INN announcements mailing list inn-bugs INN bugs mailing list inn-committers INN committers' mailing list inn-workers INN workers' mailing list irrtoolset Discussion of the Internet Routing Registry Toolset (IRRToolset) kea-announce News and announcements of new releases for Kea DHCP kea-dev For discussion about Kea development, including proposed new feature design kea-users Discussion about installing and managing the Kea software suite for DHCP moderators Discussion list for Usenet newsgroup moderators noc-notice Notification list for network events stork-users Mailing list dedicated to discussion of Stork software development training-info Announcements-only list for ISC training events and webinars Most Popular Mailing Lists The archives of all the mailing lists are viewable by clicking on the list information page above.\n","permalink":"/mailinglists/","tags":null,"title":"Public Mailing Lists"},{"categories":["BIND","DNS"],"contents":"Today ISC is pleased to announce the release of BIND 9.18.0. This is the first stable release that contains support for DoT and DoH. This branch will be supported for 4 years, through the end of 2025. With this new branch, the BIND 9.11 branch is officially EOL. 
We will continue to issue security patches for 9.11 for the remainder of Q1 2022, but that will be the end of maintenance for 9.11.\nThis new release of BIND is available on our downloads page.\nSignificant work covered in the 9.18.0 branch includes:\nSupport for securing DNS traffic using Transport Layer Security (TLS). TLS is used by both DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). Support for zone transfers over TLS (XFR-over-TLS, XoT) for both incoming and outgoing zone transfers. The dig tool is now able to send DoT queries (+tls option). Support for OpenSSL 3.0 APIs was added. You can read more about this new edition of BIND in the release notes.\nReferences BIND Administrative Reference Manual for 9.18.0 We have updated the BIND Significant Features Matrix. Please see this KB article on Changes to be aware of when moving from BIND 9.11 to 9.16, as well as this article on BIND Memory Consumption Explained. ISC\u0026rsquo;s Software Support Policy and Version Numbering explains the release cycle. ","permalink":"/blogs/bind918/","tags":["BIND"],"title":"BIND 9.18.0 - A New Stable Branch"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"2021 was, sadly, a lot like 2020 for a lot of people: a year of private disappointments and losses for many of us as individuals. It was, however, an extremely successful year for ISC.\nWe completed work on major new stable versions of BIND 9, Kea DHCP, and Stork: BIND 9.18, Kea 2.0, and Stork 1.0. We had some rough patches this year with BIND 9.16, which suffered because an extensive refactoring project was simply not quite finished when we created the branch. Thankfully we have successfully completed that now, and its successor, BIND 9.18, is looking very good. The Kea user base is growing and both Kea and Stork are maturing. We continue to balance long-term ambitious changes with smaller bug fixes and new features. 
All of our projects maintained a fast cadence of mostly monthly releases, while improving their QA processes.\nWe made continued progress in updating and adding F-Root nodes. We also increased participation and leadership from ISC in policy discussions about Root Server Operator (RSO) governance, with the active engagement of ISC’s new general counsel, Rob Carolina.\nIt is always reassuring to see that ISC can still attract talented people who are excited to work on open source. We added new staff on the BIND and DHCP development teams, and new management and new staff for our support team.\nFinally, ISC’s financial picture is very solid right now. We will probably always be anxious about our financial security, given that we are trying to fund the development of free software, but we are starting 2022 in a good place. We ended 2021 with nearly 150 support subscribers - over 90% of them renewals from 2020 - with new customers more than making up for the few we lost. More of our customers are subscribing for both BIND and Kea support, and we are gaining more traction with larger enterprises. We are at what we feel is an optimal size, and are feeling cautiously optimistic about 2022.\nThe State of F-Root - Ray Bellis, Director of DNS Operations Despite the Covid-19 pandemic lasting far longer than anyone expected, 2021 was a good year for F-Root, especially since node deployment depends on people actually being physically present in data centers to rack up and connect the hardware. We deployed F-single nodes at nine new locations, and upgraded “classic” installations to F-single configuration at a further eight locations.\nAs of the end of 2021, F-Root consists of:\n3 “global” sites (at the end of 2020, there were 3) 58 “F-single” sites (2020: 42) 8 “classic” sites (2020: 16) 215+ additional nodes hosted by Cloudflare The global sites host 2x F-Root servers as well as F-Root management infrastructure. 
An F-single is our current base configuration, which relies on a single 1U server to provide both the F-Root service and BGP Anycast. A classic site comprises 2x F-Root servers, a console server, routers, and switches. The classic sites are all expected to be phased out and upgraded to the F-single configuration by the end of 2022, and we have a strong pipeline of new installations and upgrades due as well.\n(To see a list of all the current F-Root nodes, visit https://www.root-servers.org/ and select F from the Root Servers list.)\nOur provisioning systems continue to evolve, with lots of effort going into ensuring the consistency of deployments and improving our monitoring capabilities. Further monitoring improvements will continue to be the main focus during 2022 (aside from actual node deployments).\nSite Changes Completed in 2021 New Sites\nCode Location Sponsor ABQ1 Albuquerque, NM, US NMIX AGP1 Malaga, ES Startnix CCU1 Kolkata, IN IIFON DAD1 Da Nang, VN VNNIC DEN2 Denver, CO, US Peaktera PAH1 Paducah, KY, US PIE RIC1 Richmond, VA, US Ninja-IX SCL2 Santiago, CL PIT TAS1 Tashkent, UZ SNS-IX Upgraded Sites\nCode Location Sponsor AKL1 Auckland, NZ InternetNZ FRA1 Frankfurt, DE DE-CIX GRU1 São Paulo, BR NIC.BR LIS1 Lisbon, PT DNS PT OSL1 Oslo, NO NIX PRG1 Prague, CZ ISC / Peering.cz SCL1 Santiago, CL NIC.CL TPE1 Taipei, TW CWT BIND 9 - Ondřej Surý, Director of DNS Engineering Even with the pandemic raging outside, 2021 was a productive year for the BIND 9 team. Notably, we stabilized BIND 9.16 to the point that it has been designated as an Extended Support Version (ESV).\nAt the beginning of 2021, Petr Špaček joined the BIND 9 team and began improving our recursive performance testing. Later in the year, Aram Sargsyan jumped right into the fray, working on OpenSSL improvements. 
In another major personal achievement, Mark Andrews hit 20 years at the company, and he has been named Distinguished Engineer to reflect his long-time commitment to BIND 9.\nRelease Schedule At the beginning of 2021, we decided to modify our BIND release model again, to lengthen the time between major branches and provide extended support for every stable branch. You can find out more in our blog post on the topic. This leaves us with a 4-year support cycle for each Stable/ESV release and a 2-year overlap to allow a graceful migration period for our users.\nWe have reduced the changes backported to the BIND 9.11 branch to a bare minimum, keeping the promise that only security and high-impact issues will be fixed in BIND 9.11 as it nears end-of-life. Code changes in BIND 9.16 were unfortunately more extensive in the first half of 2021, but since we marked BIND 9.16 as ESV in mid-year, we are gradually reducing the amount of work backported from the development branch to BIND 9.16.\nBIND 9.16 The BIND 9.16 network manager code was hardened in the first half of 2021 to the point where it is now stable and reliable. The TCP-handling parts of the network manager code had to be rewritten from scratch, but while this was a difficult choice to make during a stable cycle, it has proved to be reliable since that code was merged.\nPerformance BIND 9.16 authoritative performance was already in the “good enough” ballpark, achieving around 1Mqps; during the year, we significantly improved the recursive performance testing, building on Petr’s experience. The new recursive testing framework allowed us to identify and fix bottlenecks in the existing code, which reduced the number of threads by one-third by moving internal tasks to run in the network manager worker loops. This at least doubles BIND 9.16\u0026rsquo;s performance compared to BIND 9.11. 
We published this blog post where recursive performance is discussed in more detail: https://www.isc.org/blogs/bind-resolver-performance-july-2021/. We continued to work in this area, refactoring the dispatch (DNS client) code in the upcoming BIND 9.18 release, bringing even higher performance to the recursive function of BIND 9 while reducing memory consumption at the same time.\nMemory In BIND 9.18, the BIND 9 native memory allocator was removed and replaced by jemalloc, which can be optionally compiled in and is recommended for all workloads. BIND 9.18 uses the same or less memory than BIND 9.11 while performance has more than doubled.\nUsers upgrading from 9.11 to 9.16 reported significantly higher memory usage. We fixed this by backporting some of the improvements made for 9.18 to 9.16. This is discussed at length in this KB article: https://kb.isc.org/docs/bind-memory-consumption-explained.\nPrivacy The upcoming BIND 9.18 will include support for DoT (DNS over TLS), XoT (XFR over TLS), and DoH (DNS over HTTPS), adding an encryption layer for these new privacy-focused transports. This is important for users who want to enhance the privacy of their client-resolver connections without relying on the “big tech” providers. We will continue our work in this area, so BIND 9 will remain a great choice for people who want additional privacy.\nProtocol Engineering ISC\u0026rsquo;s engineers participate in IETF protocol development. We have collaborated with NLNetLabs on interoperability changes to DNS Cookies - both writing a standard RFC and implementing the new SipHash-based algorithm for DNS Cookies. We continued to collaborate on the new iteration of Catalog Zones that should make all the open source versions interoperable, allowing for more diversity in the secondary servers. 
BIND 9 now includes support for HTTPS and SVCB records, finally solving the long-standing \u0026ldquo;CNAME-flattening/ANAME\u0026rdquo; problem.\nDocumentation The BIND Administrative Reference Manual (ARM) has been converted from DocBook (XML-based) to Sphinx (RST-based) and is now regularly published at https://bind9.readthedocs.io/.\nInclusive Language The BIND source code, documentation, and configuration options have been changed to follow RFC 8499, DNS Terminology, and to stop using terminology that is considered offensive. The new language is the default in the configuration but the old configuration syntax still works, to prevent breakage of existing deployments. The main git branch is now called main.\nSecurity In 2021, BIND 9.11 had four CVEs and BIND 9.16 had five CVEs. Details can be found in the BIND 9 Security Vulnerability Matrix in our Knowledgebase.\nDuring investigation of a GSS-TSIG vulnerability in the BIND 9 implementation of SPNEGO, BIND 9 engineers found (and reported, of course) a serious vulnerability in the Heimdal Kerberos implementation used in FreeBSD and other BSDs.\nPerformance Impacts from Zone Transfers After at least one customer reported performance degradation during zone transfers when deploying BIND 9.16, we analyzed the code and implemented a new approach to these transfers. Previously, BIND 9 used a technique where it would “quantize” long-running jobs into smaller chunks and intermingle this small-chunk processing with regular query-response processing. Under the new approach, long-running jobs are offloaded to a separate thread pool that runs the jobs independently, leaving scheduling to the operating system kernel. 
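The contrast between the two approaches can be sketched as follows. This is illustrative Python only, not BIND's actual C implementation; the function names and chunk size are hypothetical stand-ins for the real zone-transfer code:

```python
from concurrent.futures import ThreadPoolExecutor

def long_running_job(records):
    # Stand-in for expensive work such as applying a large zone transfer.
    return sum(records)

def run_quantized(records, chunk_size=1000):
    # Old approach: split the job into fixed-size chunks and interleave
    # them with regular query-response processing on the main loop.
    total = 0
    for i in range(0, len(records), chunk_size):
        total += long_running_job(records[i:i + chunk_size])
        # ...the main loop would process pending queries here...
    return total

def run_offloaded(records, pool):
    # New approach: hand the whole job to a worker pool and let the OS
    # scheduler decide when it runs; the main loop stays responsive.
    return pool.submit(long_running_job, records)  # returns a Future
```

In the offloaded version the main loop never has to guess a "fair" chunk size; the kernel preempts the worker thread as needed, and shared state is locked only around the brief critical sections.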
Instead of being quantized into arbitrarily sized chunks, the long-running job now blocks the other threads for the least possible time, locking and unlocking the shared resources only briefly.\nBuild System The build system used by BIND 9 has been completely rewritten to use autoconf, automake, and libtool, making it more modern and easier to understand and modify.\nWindows Support for building BIND 9 on Windows has been removed in the development release; the last major release supporting Windows is 9.16.\nPKCS#11 During the development cycle, it became clear that supporting different implementations of PKCS#11 natively in BIND was inefficient and expensive. Therefore, we contracted with an external party to improve the libp11-based engine_pkcs11 for OpenSSL. This has been done: performance and stability were improved to the point where we were able to drop BIND\u0026rsquo;s native PKCS#11 implementation in favor of the OpenSSL-based PKCS#11 implemented in engine_pkcs11. In 2022, we plan to implement the PKCS#11 engine for OpenSSL 1.x and the PKCS#11 driver for OpenSSL 3.x, without using libp11.\nFuture Development In 2022 we plan to continue fixing bugs, adding new features requested by users and customers, and refactoring old code. A major project planned for the next development cycle is the refactoring of the venerable red-black tree database (RBTDB) implementation used to store authoritative zone records and cached RRsets.\nBIND 9 Quality Assurance - Michał Kępień, BIND QA Manager We recently published an interview with the BIND QA Manager.\nQA has implemented a unified monthly release cycle for both security and normal patch releases, bringing more predictability and stability to the release process. Continuous Integration now runs completely in the GitLab CI, and we no longer use Jenkins or even the older “Robie.” All of their functions were either moved to the GitLab CI or eliminated entirely. 
The QA team designed and implemented a completely new recursive performance testing framework which provides much-needed data for making data-based decisions. This new framework nicely complements our existing authoritative performance testing in perflab. Kea DHCP, ISC DHCP, and Stork - Tomek Mrugalski, Director of DHCP Engineering Despite all the craziness in the world and in the personal lives of some team members, 2021 was a very good year for the project. We managed to release Kea 2.0.0, which was a major milestone in the project history. Bumping the major number to 2 (we\u0026rsquo;ve been releasing 1.x since 2015) reflects the overall feeling that the code base has matured significantly and is usable even in the largest and most demanding ISP deployments.\nGrowing User Base This confidence is backed up by the growing number of existing and new users. We ended 2021 with 51 Kea support customers, and during the year we closed 521 engineering tickets, including new features, bug fixes, documentation improvements, and more. We shipped 12 releases, several code drops, and quite a few patches to customers who couldn\u0026rsquo;t wait for the next monthly release. We\u0026rsquo;re also seeing more support customers who are not ISPs; the expansion into enterprise and university markets is both exciting and scary. The trend is also visible in the new features being requested and implemented.\nHA+MT and Performance Kea is now fully multi-threaded, including the tricky High Availability scenario with two servers communicating over multiple connections and multiple threads. Being able to process lease updates in parallel (with the bottlenecks of a shared single UNIX socket and single TCP connection eliminated) really gave impressive results. In the most extreme scenarios, Kea performance improved tenfold over prior versions. More typical scenarios are less dramatic, but Kea 2.0 is still several times more performant than Kea 1.8. 
In the most efficient scenario, Kea is able to assign 38K new leases per second; we\u0026rsquo;re approaching levels where our testing tools are not able to keep up. It\u0026rsquo;s a good problem to have.\nGSS-TSIG We completed a major sponsored development project: GSS-TSIG. This was by far the biggest custom feature in the history of Kea. GSS-TSIG provides integration with Microsoft Windows environments, using Active Directory that in turn uses Kerberos. The team worked closely with the customer and provided many engineering checkpoints, so the customer could oversee the technical details and perform early integration.\nSecurity For much of its early life Kea was focused on delivering protocol features, but since it is more or less complete in this regard, our development efforts now shift towards improving management. This is of vital interest for many existing and prospective users. Kea\u0026rsquo;s REST API interface can now be protected with TLS, including the mutual mode where both server and connecting client certificates are validated. We also developed support for MySQL connections over TLS, and there\u0026rsquo;s a new section in the Kea ARM about security.\n2021 was the second year in a row when we did not publish any security advisories for Kea. However, our internal lodge of scoffers fears that due to the new TLS and Kerberos code in Kea, our good fortune with security incidents may come to an end. Time will tell!\nQuality Assurance We migrated our automated Kea build and testing farm to Amazon Web Services (AWS), and took this opportunity to review and significantly update our testing procedures. Previously, we had a fixed number of virtual machines (VMs) that were always on. With AWS, we migrated to an on-demand mode, where the VMs are created to run a specific set of tests and then are destroyed afterward. This allows us to run tests on more systems. 
At the end of the year, we had 3,789 system tests, running tests on 14 operating systems (various versions of Alpine, CentOS, Debian, Fedora, FreeBSD, and Ubuntu). With over 8,400 unit tests in Kea code, that gives us over 118,000 test/OS combinations that are being run on every commit to master.\nThe Kea sources are now 1,008,159 LOC (that\u0026rsquo;s over a million lines of code). That code is now scrutinized using multiple automated tools, including but not limited to Coverity Scan, TSAN (thread sanitizer), ASAN (address sanitizer), UBSAN (Undefined Behavior Sanitizer), gcov (coverage report), and more. We also run extensive performance tests that check a variety of simple and complex scenarios, including simulating millions of DHCP clients, thousands of subnets, millions of reservations, and more. We now provide native packages for many OSes that greatly simplify the installation effort.\nStork 2021 was also a good year for Stork. The team put out nine releases in total, including 1.0 in December. While the project is still minuscule compared to Kea, the user base is growing rapidly, reporting bugs and requesting new features. The project has gotten many new features: TLS support; lease inspection; a configuration review module (Stork can now make suggestions about items to improve in Kea configs); better Prometheus and Grafana integration with new statistics; a full configuration viewer for Kea; a service configuration dump tool with the ability to get all the debugging information typically needed by ISC\u0026rsquo;s support team, such as config files, log files, a database dump, the OS/Kea/Stork versions, etc., in one tarball; and many more. We now have a more-or-less complete dashboard for Kea and are shifting towards making Stork capable of configuring Kea.\nStork now has a bit over 123,000 lines of code. The team closed 161 GitLab issues in 2021.\nISC DHCP Unfortunately, 2021 was not a productive year for ISC DHCP. 
With our DHCP engineering resources fully dedicated to general Kea work, GSS-TSIG, and rapidly increasing user requests for Stork, we decided to focus on these newer projects rather than the legacy ISC DHCP. Nevertheless, we\u0026rsquo;re now getting ready to release ISC DHCP 4.4.3, which will be the last release with both the client and relay components. While the release will be finished in 2022, much of the preparation for this updated release was done in late 2021.\nSupport - Chuck Stearns, Director of Technical Support We ended the year with just under 150 customers, including 17 that were new for 2021, after losing only six customers during the year. Over 90% of our 2020 support customers renewed their agreements with us.\nThe support team changed significantly in 2021, with a new manager and a doubling of the number of support engineers, from three to six.\nHere is a sample of the things the support team accomplished in 2021:\nWe reduced the median age of active support tickets by 78% in the second half of the year. We revamped our Configuration Review process to make it more timely and manageable. We increased our use of Zoom for customer meetings, instead of relying nearly exclusively on email. The support team attended Kea training. The team managed five CVE/ASN cycles and four Operational Notifications. The team handled 440 support tickets. In the last six months of 2021, we had 231,354 views of our Knowledgebase at https://kb.isc.org, with 83,438 reads. We published 33 new KB articles and updated more than 150 articles. We added SEO text to all of the articles to improve their findability via the Internet. Here are some things we\u0026rsquo;re looking to accomplish in 2022:\nWe\u0026rsquo;ve begun the process of migrating to a new support ticketing system (Salesforce). We plan to perform a review of our support processes, specifically the Advance Security Notification, software delivery, and customer/account maintenance processes. 
We hope to improve our knowledge management, specifically via better maintenance of the KB and software administrators\u0026rsquo; manuals. Other ISC Activities Policy Development - Rob Carolina, General Counsel F-Root Operations and RSO Governance ISC\u0026rsquo;s status as a root server operator (RSO) spawned some significant work related to the ongoing discussions of a new root server system governance structure (RSS GS). In mid-2021, the community of 12 RSOs began to review and discuss a draft proposal for a new RSS GS put forward by the Root Server System Governance Working Group (RSS GWG), chartered by ICANN.\nThis resulted in the adoption and publication on 17 November 2021 of \u0026ldquo;RSSAC058: Success Criteria for RSS Governance Structure\u0026rdquo; and \u0026ldquo;RSSAC059: RSSAC Advisory on Success Criteria for the Root Server System Governance Structure.\u0026rdquo; These documents were warmly received by the ICANN Board of Directors and will now form part of a revised RSS GWG work plan.\nPublic Policy The European Union Proposal for a new NIS2 Directive became a significant focus for ISC in March 2021. NIS2 (a proposed law building on the original NIS Directive to strengthen European approaches to cyber security) was drafted in a manner that appeared to call for EU member states to regulate the cyber security arrangements of all 12 global RSOs. While ISC applauds the desire to strengthen cyber defenses, ISC submitted public comments on the proposed Directive specifically calling out the danger of any sovereign state attempting to directly regulate the world\u0026rsquo;s root server system (RSS). We pointed out that, far from enhancing the resilience and security of the Internet, sovereign intervention in RSS operations could destabilize the RSS and DNS. We explained that regulatory intervention in the RSS by one sovereign state could prompt (potentially conflicting) regulatory intervention by other sovereign states. 
We suggested that this would, in turn, risk fragmenting the Internet as we know it. Some of our fellow RSOs made similar observations. The European Parliament subsequently amended the draft Directive in late 2021 to take root servers out of its scope, but the matter is not yet fully resolved. We now wait to see whether the European Commission will attempt to negotiate the reintroduction of the RSS into the law.\nISC joined the Open Invention Network (https://openinventionnetwork.com), the patent non-aggression community.\nWebinars, Talks, and Blogs We held and recorded nine technical webinars, providing ongoing training to our users. In addition to posting the recordings on ISC’s YouTube channel, we also created a BrightTalk channel to promote our recordings to a wider audience.\nISC staff gave six conference talks, archived on the ISC website at https://www.isc.org/presentations/. This is fewer than in prior years, because - of course - the pandemic has ruined everything.\nWe posted 27 ISC blogs. We published the 2020 Annual Report and updated and re-published the ISC timeline.\nWe launched the ISC swag store, using Shopify and Printful to sell and create some fun ISC-branded items.\nISC donated to several non-profit NOGs, as well as to Kea conservation in New Zealand, showing our support for some important organizations.\nRecognition In some past years, we have simply been unable to keep up with the number of externally created issues. In 2021, we did some impressive catching-up on that front.\nWe don’t always fix the issues reported, whether because the software version is old, because we disagree with the user’s interpretation, or because the reward doesn’t seem worth the effort. 
Last year, though, we fixed more than 80 issues reported by open source users through our GitLab.\nWe are especially grateful to reporters who:\nSubmit issue reports for problems that exist in currently supported software versions Test the development branch so we can find and fix issues before they end up in a stable version Include logs, crash dumps, and relevant configuration Answer follow-up questions from the developers attempting to root cause the issue Verify the fixes we produce Provide a patch or suggest a fix Here are some of these technical contributors from the user community whose reports improved BIND in 2021:\nAdrien BERNARD - reported a bug Alexander Sulfrian - reported a regression in check-names Anand Buddhdev - pointed out a documentation error Andrej Podzimek - found a bug in RNDC 9.17.16 Andrey Blokhintsev - reported an issue with AXFR under FreeBSD and validated the fix for us Andrey Blokhintsev (and several others) - reported an assertion failure which was published as CVE-2021-25218 Benjamin Gentil - reported a deadlock in rndc in 9.11 Bhargava Shastry - poked us to use Google’s OSS-fuzz program Carl Byington - found a bug in versions 9.16.13, 9.16.13-S1, 9.16.14, 9.16.14-S1, 9.17.11, and 9.17.12 Cesar Kuroiwa - provided extensive help in reproducing a crash in XFR over TLS Chris Caputo - tested our libuv integration in the development branch and proposed a logging change Chris Malton - submitted an excellent bug report on IDN support, which led to a fix with help from Tim Rühsen from libidn2 Damir Islamov - encountered a problem when upgrading and provided all the information we needed to find and fix it Daniel Stirnimann - found several bugs in the KASP feature, did repeated testing, and validated fixes. 
Daniel also pinpointed a performance issue with DNSSEC-verify, and in general is a very helpful DNSSEC tester David Ford - reported a compile issue David M Walker - reported a build failure that was fixed Egbert - found a bug in 9.16.15 Erich Eckner - reported an issue with AXFR in 9.16.12 Flindeberg - submitted a memorable enhancement request, although we opted not to act on it Friedlho - reported a dig crash Greg Rabil - uncovered an issue with static linking and OpenSSL Hakan Gustafsson - reported an issue with updating signed zones Håvard Eidnes - reported some \u0026ldquo;impossible\u0026rdquo; statistics and along with Cathal Mooney tracked the issue for quite a while until it was finally fixed. Jean-Christophe Manciot - reported a journal bug that we fixed in the April releases Jean-Christophe Manciot \u0026amp; Vladimir B. Vinogradov - identified a permissions issue causing their installs to fail Jim Pirzyk - reported a bug in dig Jim Popovitch - identified some packet fragmentation issues Jim Westfall - created a confidential issue around memory locking, fixed in early 2021 Jinmei Tatuya - contributed a patch to make the BIND plug-in function asynchronous John Heasley - went above and beyond reporting a complex bug with a lot of follow-up John Perone - found a bug in the 9.17 development branch John W. 
O\u0026rsquo;Brien - pointed out a documentation bug in auto-dnssec Josh Soref - reported a documentation bug, probably introduced when we re-formatted for RST JP Mens - reported a usability issue we addressed (and a couple we didn\u0026rsquo;t) Klaus Darilion - created a confidential issue with AXFR in 9.16.2 Kris Karas - found and verified the fix for a deadlock in 9.16.18 and sparked a discussion about integrating NSEC3 into dnssec-policy Laurent Frigault - submitted a crash dump with all the information we needed to find and fix it Laurent Gouhier - reported a performance loss in 9.16 Legacy1 - reported a Windows issue, which Richard Neal reproduced for us LM Jogbäck - found an issue with DNSSEC policy Lusia Kundel - submitted a typo fix in nsupdate help Marc Dequènes (Duck) - reported several issues with dnssec-policy and tested the fixes (Quack!) Mathieu Arnold - reported an issue that was first logged with FreeBSD. Several other BSD users chimed in, including Xin Li and Dimitry Andric, until John W. O\u0026rsquo;Brien confirmed that the issue had been fixed in a later release Michael Osipov - reported a bug in a test, which was fixed, then found yet another problem with Kerberos linking, and also found an issue with nsupdate Michel Lespinasse - reported an issue with DNSSEC key rotation with multiple views Ole Bjørn Hessen - reported that BIND 9.16.8 does not honor CPU affinity Oskar Stenman - noticed that \u0026ldquo;masters\u0026rdquo; were not renamed to \u0026ldquo;primaries\u0026rdquo; in CATZ Patrick McLean - submitted a patch to improve dig’s timestamp accuracy Pdw-mb - found a documentation error Per Lundberg - added to an existing bug report instead of creating a new one (thank you for this!) Petr Menšík of RedHat - made many contributions throughout the year, including several merge requests Phil Regnauld - pointed out a typo in the DNSSEC Guide Piolink-kycho - found a memory leak during stress testing Rainer W. 
- provided a bug report with suggested fixes Ralf Jung - reported an issue with in-line signing, confirmed by Jean-Christophe Manciot Richard Laager - found an interoperability issue we ended up deciding not to add a workaround for, but which we appreciated Roland Illig - took the time to report a minor documentation typo Rosen Penev - submitted a patch Sara Dickinson - helped in testing and specifying requirements for XoT Sean Zhang - found and correctly diagnosed the infamous issue with the missing “w” that resulted in security fixes across all supported versions in June Stacey Marshall - reported a Solaris configuration issue Stanislav Levin - found a problem with the cache for managed keys in 9.16 Stuart Henderson - reported an obscure test issue with OpenBSD and Raspbian Sven Strickroth - suggested an additional log message to improve usability Thomas Amgarten - found a tricky issue with journal files Thomas Amgarten - helped us with KASP troubleshooting Timothe Litt - highlighted the use of http references in the documentation Tobias Günther - prompted us to update Launchpad for new Ubuntu versions Triatic - tested BIND’s DoH implementation vs Windows Vsevolod Volkov - observed a memory leak on FreeBSD and verified the fix for us Wil Knoll - reported a bug in QNAME minimization We also offer our thanks to:\nOur stalwart ISC DHCP community experts, Simon Hobson, Sten Carlsen, Bill Shirley, Bob Harold, Niall O’Reilly, Glen Satchell, and Gregory Sloop, who are helping a whole new generation of users with their ISC DHCP issues via the dhcp-users mailing list.\nNumerous other BIND users, Kea users, and ISC DHCP users, who provided expert advice to others on our user mailing lists. ISC staff could not possibly answer all these questions ourselves, not only because of the number of questions, but because we don’t have the depth and variety of operational experience our users have. 
We are grateful for these contributions of technical expertise.\nThe many sponsors of our F-root nodes. They donate rackspace, purchase servers, help support our operating costs, and generally make it possible to provide free root services to the Internet.\n","permalink":"/blogs/2021-accomplishments/","tags":["2021","ISC"],"title":"ISC's 2021 Accomplishments"},{"categories":null,"contents":"Introduction This Acceptable Use Policy (“AUP”) sets forth the principles that govern the use of various systems, services, and products made available by Internet Systems Consortium (\u0026ldquo;ISC\u0026rdquo;). The AUP has been created to promote the integrity, security, reliability, and privacy of ISC\u0026rsquo;s systems and networks.\nDepending on the circumstances, \u0026ldquo;ISC\u0026rdquo; may refer to Internet Systems Consortium, Inc. (a 501(c)(3) company) or Internet Systems Corporation (a wholly owned subsidiary). ISC\u0026rsquo;s commercially contracted services are normally supplied by Internet Systems Corporation.\nCompliance With Law Customers shall not post, transmit, re-transmit or store material on or through any of ISC\u0026rsquo;s system services or products that: (i) is in violation of any local, state, federal or non-United States law or regulation; (ii) is threatening, obscene, indecent, defamatory or that otherwise could adversely affect any individual, group or entity (collectively, “Persons”); or (iii) violates the rights of any person, including rights protected by copyright, trade secret, patent or other intellectual property or similar laws or regulations including, but not limited to, the installation or distribution of “pirated” or other software products that are not appropriately licensed for use by the customer.\nProhibited Uses of ISC\u0026rsquo;s Systems, Services and Products This AUP identifies the actions that ISC considers to be abusive, and thus, strictly prohibited. 
In addition to the other requirements of this AUP, Customer may only use ISC’s systems, services and products in a manner that, in ISC’s sole judgement, is consistent with the purposes of such systems, services and products. If a customer is unsure whether a contemplated use or action is permitted under the AUP, the customer should e-mail ISC with a description of the proposed use at info@isc.org for a determination as to whether the use is permissible under this AUP. The examples identified in the subsections below are non-exclusive and are provided, in part, for guidance purposes.\nThe following uses of ISC\u0026rsquo;s systems, services and products as described in subsections A through E are expressly prohibited:\nProhibited Actions: General Conduct Transmitting on or through any of ISC\u0026rsquo;s systems, services, or products any material that is, in ISC\u0026rsquo;s sole discretion, unlawful, obscene, threatening, abusive, libelous, or hateful, or encourages conduct that may constitute a criminal offense, may give rise to civil liability, or otherwise may violate any local, state, national or international law.\nTransmission, distribution, or storage of any information, data or material in violation of United States or state regulations or law, or of the common law.\nViolations of the rights of any Person protected by copyright, trade secret, patent or other intellectual property or similar laws or regulations.\nActions that restrict or inhibit any Person, whether a customer of ISC or otherwise, in its use or enjoyment of any of ISC’s systems, services or products.\nResale of ISC\u0026rsquo;s services and products, without the prior written consent of ISC.\nDeceptive on-line marketing practices.\nFurnishing false data on the signup form, contract, or online application, including fraudulent use of credit card numbers (such conduct is grounds for immediate termination and may subject the offender to civil or criminal liability).\nProhibited Actions: System 
and Network Security Attempting to circumvent user authentication or security of any host, network, or account (“cracking”). This includes, but is not limited to, accessing data not intended for the customer, logging into a server or account the customer is not expressly authorized to access, or probing the security of other networks (such as running a SATAN scan or similar tool).\nEffecting security breaches or disruptions of Internet communications. Security breaches include, but are not limited to, accessing data of which customer is not an intended recipient or logging onto a server or account that customer is not expressly authorized to access. For purposes of this section, “disruption” includes, but is not limited to, port scans, ping floods, packet spoofing, forged routing information, deliberate attempts to overload a service, and attempts to “crash” a host.\nUsing any program/script/command, or sending messages of any kind, designed to interfere with a user’s terminal session, by any means, locally or by the Internet.\nExecuting any form of network monitoring which will intercept data not intended for Customer.\nProhibited Actions: E-Mail Harassment, whether through language, frequency, or size of messages, is prohibited.\nSending unsolicited mail messages, including the sending of “junk mail” or other advertising material to individuals who did not specifically request such material (“e-mail spam”). Customers are explicitly prohibited from sending unsolicited bulk mail messages. This includes, but is not limited to, bulk-mailing of commercial advertising, informational announcements, and political tracts. Such material may only be sent to those who have explicitly requested it. 
If a recipient asks to stop receiving e-mail, the Customer must not send that person any further e-mail.\nCreating or forwarding “chain letters” or other “pyramid schemes” of any type, whether or not the recipient wishes to receive such mailings.\nMalicious e-mail, including, but not limited to, “mailbombing” (flooding a user or site with very large or numerous pieces of e-mail).\nUnauthorized use, or forging, of mail header information.\nUsing an ISC or customer account to collect replies to messages sent from another provider.\nUse of unsolicited e-mail originating from ISC\u0026rsquo;s network or networks of other Internet Service Providers on behalf of, or to advertise any service hosted by ISC, or connected via ISC\u0026rsquo;s network.\nWillful failure to secure open SMTP ports so as to prevent the unauthorized use of customer resources for the purposes of sending unsolicited e-mail by a third party.\nProhibited Actions: Usenet Newsgroups Posting the same or similar messages to large numbers of Usenet newsgroups (“Newsgroup spams or USENET spam”).\nPosting chain letters of any type.\nPosting encoded binary files to newsgroups not specifically named for that purpose.\nCancellation or superseding of posts other than your own, with the exception of official newsgroup moderators performing their duties.\nForging of header information. 
This includes attempting to circumvent the approval process for posting to a moderated newsgroup.\nSolicitation of mail for any e-mail address other than that of the poster’s account or service, with intent to harass or to collect replies.\nPostings that are in violation of the written charters or FAQ’s for those newsgroups.\nPosting of Usenet articles from ISC\u0026rsquo;s network or networks of other Internet Service Providers on behalf of, or to advertise any service hosted by ISC, or connected via ISC’s network.\nFailure to secure a news server so as to prevent the unauthorized use of customer resources by a third party which may result in Usenet posts which violate this policy.\nAdvertisements posted in newsgroups whose charters/FAQ’s explicitly prohibit them. The poster of an advertisement or other information is responsible for determining the etiquette of a given newsgroup, prior to posting to it.\nComplaint and Enforcement Complaint Complaints regarding abusive conduct may be reported by FAX to (650) 423 1355, ATTN: Policy Review Board, or by mail to:\nInternet Systems Corporation\nPO Box 360\nNewmarket, NH 03857\nComplaints will also be accepted via e-mail, so long as a valid return address is included. ISC must be able to independently verify each instance of abuse, and so each complaint must include the COMPLETE TEXT OF THE OBJECTIONABLE MESSAGE, INCLUDING ALL HEADERS. Please do NOT send excerpted parts of a message; sending a copy of the entire message, including headers, helps to prevent misunderstandings based on incomplete information, or information used out of context. Full headers demonstrate which path the message has taken, and enable us to determine whether any part of the message has been forged. This information is vital to our investigation.\nEnforcement ISC may, in its sole discretion, suspend or terminate a customer’s service for violation of this AUP at any time and without warning. 
As a general matter, ISC attempts to work with customers to cure violations and to ensure that there is no re-occurrence of the violation prior to terminating service.\nLiability In no event will ISC be liable to any customer or third party for any direct, indirect, special or other consequential damages for actions taken pursuant to this AUP, including, but not limited to, any lost profits, business interruption, loss of programs or other data, or otherwise, even if ISC was advised of the possibility of such damages.\nMiscellaneous Modification of AUP ISC retains the right to modify the AUP at any time and any such modification shall be automatically effective as to all customers when adopted by ISC.\nApplicability of AUP The actions listed herein are also not permitted from other Internet Service Providers. Deceptive marketing, as defined by the Federal Trade Commission Deception Policy Statement, is not permitted through the ISC services or network. These rules apply to other types of Internet-based distribution mediums as well, such as RLG’s Ariel system (a system for sending FAX-like documents over the Internet).\nISC Is Not Responsible For Content ISC is not responsible for the content of any USENET posting, whether or not the posting was made by a customer of ISC.\nRemoval of Materials At its sole discretion, ISC reserves the right to remove materials from its servers and to terminate internet access to customers that ISC determines have violated this AUP.\n","permalink":"/aup/","tags":null,"title":"ISC Acceptable Use Policy"},{"categories":null,"contents":"-----BEGIN PGP PUBLIC KEY BLOCK----- Comment: GPGTools - http://gpgtools.org mQINBFwq9BQBEADHjPDCwsHVtxnMNilgu187W8a9rYTMLgLfQwioSbjsF7dUJu8m r1w2stcsatRs7HBk/j26RNJagY2Jt0QufOQLlTePpTl6UPU8EeiJ8c15DNf45TMk pa/3MdIVpDnBioyD1JNqsI4z+yCYZ7p/TRVCyh5vCcwmt5pdKjKMTcu7aD2PtTtI yhTIetJavy1HQmgOl4/t/nKL7Lll2xtZ56JFUt7epo0h69fiUvPewkhykzoEf4UG ZFHSLZKqdMNPs/Jr9n7zS+iOgEXJnKDkp8SoXpAcgJ5fncROMXpxgY2U+G5rB9n0 
/hvV1zG+EP6OLIGqekiDUga84LdmR/8Cyc7DimUmaoIZXrAo0Alpt0aZ8GimdKmh qirIguJOSrrsZTeZLilCWu37fRIjCQ3dSMNyhHJaOhRJQpQOEDG7jHxFak7627aF UnVwBAOK3NlFfbomapXQm64lYNoONGrpV0ctueD3VoPipxIyzNHHgcsXDZ6C00sv SbuuS9jlFEDonA6S8tApKgkEJuToBuopM4xqqwHNJ4e6QoXYjERIgIBTco3r/76D o22ZxSK1m2m2i+p0gnWTlFn6RH+r6gfLwZRj8iR4fa0yMn3DztyTO6H8AiaslONt LV2kvkhBar1/6dzlBvMdiRBejrVnw+Jg2bOmYTncFN00szPOXbEalps8wwARAQAB tE1JbnRlcm5ldCBTeXN0ZW1zIENvbnNvcnRpdW0sIEluYy4gKFNpZ25pbmcga2V5 LCAyMDE5LTIwMjApIDxjb2Rlc2lnbkBpc2Mub3JnPokCVAQTAQgAPhYhBK4/rHln EexZ/AB6pHS7a5pMuz04BQJcKvQUAhsDBQkD7JcABQsJCAcCBhUKCQgLAgQWAgMB Ah4BAheAAAoJEHS7a5pMuz0476oP/1+UaSHfe4WVHV43QaQ/z1rw7vg2aHEwyWJA 1D1tBr9+LvfohswwWBLIjcKRaoXZ4pLBFjuiYHBTsdaAQFeQQvQTXMmBx21ZyUZj tjim8f9T1JhmIrMx6tF14NbqFpjw82Mv0rc8y74pdRvkdnFigqLKUoN2tFQlKeG+ 5T24zNwrGrlR3S7gnM47nD1JqKwt4GnczLnMBW/0gbLscMUpAeNo/gY4g0GV/zkn Rt91bLpcEyDAv+ZhQZbkJ49dnNzl5cTK5+uQWnlAZAdPecdLkvBNRNgj/FKL41RF JGN6eqq3+jlPbyj9okeJoGQ64Ibv1ZHVTQIx5vT1+PuVX/Nm0GqSUZdLqR33daKI hjpgUdUK/D0AnN5ulVuE1NnZWjVDTXVEeU8DFvi4lxZVHnZixejxFIZ7vRMvyaHa xLwbevwEUuPLzWn3XhC5yQeqCe6zmzzaPhPlg6NTnM5wgzcKORqCXgxzmtnX+Pbd gXTwNKAJId/141vj1OtZQKJexG9QLufMjBg5rg/qdKooozremeM+FovIocbdFnmX pzP8it8r8FKi7FpXRE3fwxwba4Y9AS2/owtuixlJ2+7M2OXwZEtxyXTXw2v5GFOP vN64G/b71l9c3yKVlQ3BXD0jErv9XcieeFDR9PK0XGlsxykPcIXZYVy2KSWptkSf 6f2op3tMiQEzBBABCAAdFiEEFcm6uMUTPAcGawLtlumWUDlMmawFAlwuSqAACgkQ lumWUDlMmaz+igf/ZW8OY5aWjRk7QiXp93jkWRIbMi8kB9jW5u6tfYXFjMADpqiQ yYdzEHFayRF92PQwj81UzIWzOWjErFWLDE2xol9sP5LdzeqoyED+XTqKggpVsIs+ Lq672qnumQoZKp1YGb8MDocU2DNg/VsMdi7kCnEnPbcSuBxksmxGYomusXNrAF94 1OJ2sqd9BuFamLIyn8XUCGGYlsvMoe4kTCg6Cc1sQvx0lDG8urKN57jBKWbP4alV +JBV5KQcf74gzPmE3ypgY1tMEwxyH/WyS9ekDbai0qauX6eUAsM1bduH8fIcknLS Zl5hrJTrzWFF9/DKOth8QOwhJ9zoIF1fcAsx9okBMwQQAQgAHRYhBHpqR7X54SM6 0lUrXL2X3GOe6MR7BQJcLktcAAoJEL2X3GOe6MR7jwEH/iaolMeno1oeWAgzN6Mg bx3maweh/9Vqty1fwk7Crq1G78X5i1OCkknEL2p0Bfle4ApwcC4HZVcqCgoYpRV3 /EEXtwkMNy3plWdBbLCQSev/E1D39GzgAHiMnv7NUJnkoJbvMrvrAiUTXPTtARMM 
gjEpvgEs60wuJxS8ESomRhe/KW4myxDoBxF+K+e5bOkOvvWVcAYJHWZ1BIZs4n6b +C2vO8q5aKTkQ/XvNT7utbTOqj1SGhItRaAQKXHBdzkQ1Et3wTA4+uRg4gK12624 9LperYs26w9X9UzApl+qVxQhtWUw3tnUXMastDfQrRcvJgq1xpv++OqX5Uc93RTf SNWJAjMEEAEIAB0WIQS+DpdItxglOii7if/xsRvwXPAuVwUCXC5LlQAKCRDxsRvw XPAuV29KEACEwlTVVKe4gnBYHnlAD7csoQ0+gJ6C+Ofzlw+UItRIcFeVCAknSGBs NPxr9JStIvKpmsbSKpCNUEAYnRP2immh94y/C6BuTe1uUUmqBGr1f4OAUwZpmI29 ixYeY/uUs9FZO3bS0/WtG46tdcJK41qtM0DYAGT3oeZhJMTW15dfvMGlFukauSOU +BbR+6sZhqdbWl/AOTE/6x5otnAaW0GObY/BW240Xq/KTgBrzVdK5qNoYsMVsiTd 0im0JKvFG08ED+ZfcILhlO6G9jRhoTkhtYuf8CKN1dPf2IoB5FrRFf0xqRr9hNlk X7ViNMP9OPb8i3BubWvRi5rNSquCwrFATSiAgaA9Yi1BNzQsmQxOql9lsh7eCH7m +8zzUg9umWI6PkSv8vHBo2kPX73wmtEsF6vxJlk0yDBuQw7y0uuKh406tEEk4cP2 8U4baq+ihpioupDhNuEII1h1Eh/RBE408RAOpcr+2F0m/fKOoJyz7u+AxyV81Ia6 fyBnUfZnlfKo16w87c1HJRs9dKkRa5yGziBf9TcED3sru58Pftes2Nr80/iOh26i P2pRihcIyrmeAqDWnneErVCmPMDTe6zkMrm/0iZ25/Jfq+M8IHEzFEw3Y1FBOeFg 9TyMDwYG2biJPTNTDO0BQ+Rrvs4SjFWEYSxgJSvG1jMfSPt5AR6MJrkCDQRcKvQU ARAAufZX5WzJr0lZAhxaGpHY6JMBr4jVOCP4TrDZhwC2K4CXNM/PLLNisWzquiWa FvUDhB89kCxrEhipwVFYhBr16CDQxrr8yhah3RIxrBMYhRTxgIAkANgkhGWfDJSE zXauA7krYtS3rYwhfXe4cNsTkLPbnMUlyLJcqj2wnZcZIt97aL+NFRPyfIw1KfUb 9u3tB9seDYbvTEULeL07aTnHpWM5f3bTwJrJ2OFPzXseCCzPiVNh3Bv+YtJ1pMTr c/UHO5DoJuHLsF0wicPSrpD0twspFdR/0rT6eNycsaCtV4GQzBcMPvY7qai5XrZm Cqgluo1W6l6+F5YrKvRMtyyFkUNGcPywdjSlP44JyRrS2uzvFUViSsJArcmFG2TJ LCohnse8wqjw0dIUVbmDbE4zjaG56zkvu0k+04Wwp3XPgOZrbl6cbhX3yLhu/Gt0 dzd9EReoNfKXk32hBzKas/vdeB5DZejbOOOWYftqyZC1LvDvvrYFhFK6VGozfZ6L Fml1hzn+xPahp5tRv93/T9zXeVPm9zilGMqm/gjRgh8ojWxNQoNzJyqTPWIvWmbu EIP3T3cTFq6lJpJsg3+sfzofGWZCGnBZQGqm8rEOoUWiaKe1BvQCX1x8p4/x8/tX TaVDpQCGoqxXt09plkDuGMuiDICxBlaHWUR2jLoHc2cLrB8AEQEAAYkCPAQYAQgA JhYhBK4/rHlnEexZ/AB6pHS7a5pMuz04BQJcKvQUAhsMBQkD7JcAAAoJEHS7a5pM uz04pB8P/Amfg54IFeALiPOrKbjC3bVAQzrsf09IL8sUln/LCZIx9HgGAJj/f35S Q35sK2ucjWiDX6qCxVrWmC6caQXFgXOFSKIlqladmmgj4sIdLM5wj4nbomHChpB5 rqV/GgkFwWBQ3kPCatXvc8Bg+zKJ+wXgTuPFXefyE9R+SLuas2grQ9hAjvTGHYbq 
iYxSlNDFc1aHLAQ3bS76351MHuMHOpLzoB0OkZDCVNW4GNEqrLbINdr50RAK+Loo Z2UBIobEZjXYor9A2FWkSvdjyz6X1QKMdQMath6R91k/O0abBa7ly4/805eAGXM3 w1Xf2eMlpiUs69BeYoJBklK8aNMntpDREunJjhiPU4JoDzSxl5Qv7LuXylyo0YJA 9YmydKhTTcRdwsKc//nGr/ckg4BRl+VbtJBYvd3xGB7IQ+pT/TOakv9qCospAhr3 EQjVP/XpnWJRd+x+dq8UXqwWmTenWDE42cNr7BDFJdOqS5ZWy4sIz4sdjpSxXMB9 8iiRtKSpKRCJgXScB7SYebh835EgG2YyQGdhJMO7C6ok9POYQBqL8sBqRzImJKoT VDvOH42WArKwJWTHa4mPdiDHEIZlkONerec3JXtl4Mfv8cwZ5Lb8fSiB/x8AWvqs puc/7hQtkus4TcgutS1fwhAwpnFItpVF6+73CMQrJsblBdTjW0T+uQINBFxbVHwB EADebZOJbhPdhHeBPdlZYE3rRjB8scDpWdjrCupfmeTC9MM6JgCE4DEMBtBXk+h1 +7wfpblYYNFwGVFvytG5nvGRDtHWxwd1Z9O8Fx4Zqu0Fx/wAn7ZL3ryE+tdHR7JK 7SLxOa2X49T/8LY0U8Q65I4ZRo/b4VMcXApCmncw3QSRqHT/mYdNnf+HHPvi3jza md3iVptCS4Iaisc079DFda+htWXspBc13lmPi2vGQkWjjS3B4yO8JackyQPVhpsg KYbRBzOH0Kii8bXmyA6O5uIJYEddp5Veged4FE/ej3CrgGP1D0Yk1epx8lLbi9RB kwFS7DA5rQ23UnbSy1WyV1ZgPrWqQAWuGpjMTVTWN0ElI3AGxAnE8lZlSXyE+XyV uHjjIVrayBjLKVqDuSLdKZeCvI4QsyHH6F0NKJQkngvXxLZYxO6s0c2EFFLzdVWT 1V9GMP8UsDrrb+JsZjUVmPR1tTP4xqEQG6KjfFoQm5XWpGtFwh91OK1lwf/Bx2/C j+PquLLFcj7hEP79VDTUZPQAduTTxIeTzHXH+x1PCHFB10xxH3e82VSdJeBUrJxn riXzK50SKTTmF+uYpHqE8Jg1N2Y1n5ksuxeYUy8PFjhAeBCqZ6ZcldUDf4999e/z PT8bwfCDr8jRdqJHrq7RxTJiP5RsMudWpKeohzJGwQ5uZwARAQABiQRyBBgBCAAm FiEErj+seWcR7Fn8AHqkdLtrmky7PTgFAlxbVHwCGwIFCQO9IQACQAkQdLtrmky7 PTjBdCAEGQEIAB0WIQSVztolaxygoV8wL7WVIaftXazpGAUCXFtUfAAKCRCVIaft XazpGPeMEACm9nxA/VKf8RxDo2ZuTgyuSwlR8tCjAE4k3+UoiYUbamkW4pjx9Vgd 1zC5bNxSWZ5vlJ4CH8ArKFqNK5LBVDZqhYureAo/1Af2b9vRJw0/QQHhuXz/jqeT wwrLuKpy796Gpt+aFfcmS0ZC4QXfxJERhAP6tu1p6YmAsSb+bjziQVkKrt9mhOrL dtz6WP0Fg1joRj33FgnnLtayHvtgQrNFI3ztCjk/B2FjYZxqbBGfk5gyo0cTE2Fi oLhG/XrxIoZepFMJkGYETnYQXrOt2KuJLvawV70YQmG8EqHYY8drKA0XDZs8TVdT 5cvGvtm8ERz5znsssRBxQMI5Ml6O2ahrXp8Eq4htCzlvO8t2MOtzvqAJRiyAd6bA Uo+MGVRpnvePOR1SAgBXCd416rF0iCXc1utZxnqwdq9kJAZ+8mCLx4N4jk6AdGpX zcNkLg7QmUzXn75RxZ6GrIUYZJNMlswXq5XhSW4o8ePlaxWjh9+QTtU964AZhpA1 uoHsKGTBxHJs0w6McZm14kb2PuaO2/rpf8s8IZyc93+Y5O/gHZ6/agBjA9qN6wkQ 
R1d5UhJC4QS/m35rBGBKK9X3fqQxaBCio6Qz+m4A3GchrztJpq+2P+ma5ylsTq5j V4njky26WNtrV7+N0C4Moj3I4Qn6YU/eSManTXzHzoiPZCEH/IOxgXIiD/9Zm3Zz I+h4NCfSGyP11/w1gEzlTHQ4at/FXIIDh0Y2ZNpWPffuFQLtcER2vyKPwhDYpGMy NNHXks4azfrXVCv0wmSNBbeS8pJrYtopZpCEBrAbg/YLv9m5lpDSRHaR3gv/qMZ7 QxY+NwqciqTwGq68PuF4mDSvtfuFmbEES9Iybiie+eL/6DU2knfBjgshUe6vElR+ LYoPQ45GY2IxRTJ1pMXaZw1+evwH3UvseRGkRygiaBgoU/qR4prynvjMQcacCa+C aRnXZJYp/usVBeY0xut9toc9/OcLGoBr5h9l5YjruO2vu8VHou8N0tarVQn3YbQR Fi+YtNtclWJa8Pq1AsKRTCFwDwP6eODv6mNOrEFydNRcpiQmzp47VWF/YHRfHzCq A1wHLxLUrpQTaVw6J4FqedAQ31aAO4faA7MS+ZMNBqZCZ7lTGC6TvojqqBAN2yX7 AnnYpZHM+lGpi2/ukVzLqSkGmdNOgbu+UZvoej3YnHYig4yWP+z2xrlJl8bkhU/d r9IQE5aRCEPB/JWhHJ2/GqYl9qjshlB52+6X2KDarwptOtzT9ooArYhpMwKIYh34 c7X8tlAKYk7V5j7txIRFDKKAftC7dM82PntXJxSkWyR70GYnYjiXyrqqerqT7xIC mDEQgFOPpy09zFW62paO9uiZw6qwybwqgGpoX7kCDQRcW1TbARAA3ERo2mPv2VVg ZUFr4MtPDm4UG00YJW/LYa3D3k0e9tdSScACXprk1sAoxUlQx/CSdErPKwXG4rax iN4t5nICUUNYSC0dh09G25jC7nwsWc0AYyZu+h/FzfvpOm3fBwmBlzILlGh0URwH Ffj9fHt6hos4C+3PFZZ/X24aMJF/cov1oYi9rqFwt/l0mgtPE88Iyj2/Vp3Lergg QMzKfEuyluj9fL2cgU0Qa7oAPXmaxhHtua4cvbM5SXGo3FXjIgzH9OfM+2orebeN wH1M3ec6w+nPmRmCJLvPKGOeS7GVXL5/aOyPlDWzSXYnpCKS2ntw4K4nt0IA8n8z 1db109l/C2noDrDSJEqOo843ShNGTYOMVUrj3a+Y7o2ATc9pNZalf0PwnKas7NDb IJ152PEQw665iYXcv2awjLF6W0yuSq8kfiaAxIrsie2Dto0zgqOs0Ot9Y74u11Hh wBSHUO3mEZJScAAcI/yDF2PvjvCQSzu4mdXb77t6X2O6YHULz4A7bVQCMazcTDI9 /S0W2+ixPnnJVnE3xgjK9zuizji8JDJw1hJCQM+yTLVqq9pfvcRfQ6uwpMRzz/O3 S0zDRiA69/GyfNwkpgz5QaGpY02IK5WrQU1doRjIz4BHAYzoIOkMkRqTtjdElQZw /D3wSO2uwsEMNwRzibR/Lz1JF2aGn6EAEQEAAYkEcgQYAQgAJhYhBK4/rHlnEexZ /AB6pHS7a5pMuz04BQJcW1TbAhsCBQkDvSEAAkAJEHS7a5pMuz04wXQgBBkBCAAd FiEE1wyE5ktVjlvM7AchMuIXXx11eioFAlxbVNsACgkQMuIXXx11eiqCfQ//SFDf rOIEoslp6n6vlCuavOg02wvjskKQGP1P1Q4v40Fw1Gl87n9uXAoMpeF4H+pzUxOi BHYCQi+EemwocSThzaWfPzd3JG/0OcRymf+ZOcBb+58VJL7p88QdMFIAi5J+KMuA fEG0zLkc9anEnXoVMmQJX5K+6PyeVDvBbYGjLjQAsWTZTiVuQI0w3WxFtDGWqQII 8e/qE0DA7c/auGn7j2hid308+FcdfpmLefW9YesWjE1yYvHoCRdFOJ/7Sft4MQCI 
Re7UET3TRMBvtisP2DcqyzGPp22s4ZYFCCJJNiB92bXdEl5zXe4Ff7JTfNE/QrR7 Wg5R9hZHgHdbp8p8bA3f0y29YCx3puYg7BbmQWiMh3rXWE5b090pSpw0K9BQU3vO irr+5/2TaFOJXHl4VF03GrWsSncShCbdsdRIv4TB0lY2mN4q+e7bjlAzJJeoaS97 GIqu3DBlAJyx/ZwWW23DXXwoQ4jNuJhpl2jaCE7rVQB0uLjbp0i9Zdd4SdYZxmO/ Y+JfgoJz8eyx8wZi4eDz1ijN0WKsIGjxJH5VUK9STjijDMeG6ZZRLc6b1QCGhe97 ZbDkEUTdQGoeu4L5Fiqoma13NEsf8ofBDv+myJm/O67Va9JI3gxhIrhmF7LMzQQp lYx2peZC1CmhEnn83dtt83mhXvX6Dth657BW/Qd+GQ//SVuTPuNkBXfrTi4dbnv+ cU6IsoIBodTF/WsQ6h4kbtsPhO5DbrsLNuNumrqVEN8jw+HUsEeNvFNeMrTPdG2V 87ShQ4BQGkCf+GFRBj0myxxXOFZYQx6RpY5fCe7yOcTzpkbnPWmm7V8HdOuZ0NnL JNQ5YogOI6UvXVKv35R9qBo+G9jkhhb0eaAu6BERzKVANKfsGN7545ElZ1qlffMh AQhXGb6TsvCeSg2cWGb2cnVL2d58uVukD4PDiq4qqwgClkF3bOO70SIgGrCteHbi 4Hseopex5m6GqqjoUYXr7QQBwSaQdc+gKtEjMHCsHbUyHRk0qEHdEe+2RmL0d0ra QMJfKyYQjcCR7tnrgN4WD1h4NKRdC/KRW31MDmH9XVPrkOMQCUCnArXkOwdKWsKf h8af9HqweXOT1FHJN/M3tWaBpv6KoduF2f2pj1VhPZ2EqFUycJ26lrHyOpsynQR6 +TD+c1uXotDwKN5RW+YL1cydk6mhib64fdOyPUeTcHehjMAFgM2f5wi35Ujcj8id 37cWOqRsggSbMnGO4AUA/YtcVNG8TjZbakson8ENK7e8q4sEiNFUZ7/CtzNokwHQ 5uOG1+qB85Y4ImGnIZVeiBpjt73VVawg4Zvm/omtW50P9R+4rVhMJZZFAgrWg8BH H/KNznW0vUuShG8B+2FA/eu5Ag0EXFtVDAEQAL5ftI1GgVJEFgX5VsuFnfBnH95c zqmwEXaTP4s7Xm3O0Wy579EzRUD1eEw/UaD/q2OHScwvMP65cZYQ9w4hnCN6H96P 96Teo7LOMCssvSXIO7gqP33LKTqDzsIoAFHwWE3dq1jbyP6T1Je85mr0Edvk8kOC B1hudswAARno/7X9zGulhhwuEHk5Iey7R59yRUQqBctdNcetGyaiFjjX0evuVADi /z/s07XhDLDt7+3Vglh1/7XGC64QhB9QjZ8j0u7+0xfmLLjhi+7EpkDlAHIJXX1H 0wAsPOGKlYruQUmIsMNfBINZeulHEBZ4cAd30xsM296DzJ6QL9sAGfYMhRs0YHB/ EJ10Zv0iw1pU2jCCUv/9Kf4F4nwgHQWQP7JAbfhOIUOUq/YlxjTLnkd25+7vD3KH NQ6UiRDROR9Jwetpd/zokpf5O5iTBpVL+sCq+NsTZyDOjITve2sY0V8v10M+Z+pL cp/cUZ4JEDS/WJ4/ovBNJP8b+YwN/RBgCjl8UBX/N+e7AA52eYP2H9GK9XPkzSCE VxEf5PyjGrwedpoLkzagrHsDuWo3uBquLyneT/ozihqKQAuInUy5B7rWU4mpKHe5 Vto5o6Zuj+6MgHgIQzRK6Da2ziMNEmroxwZibcYCtUPdvcvxGh+byclnzBclKjOw kAalFPx0SxEbHmzPABEBAAGJBHIEGAEIACYWIQSuP6x5ZxHsWfwAeqR0u2uaTLs9 OAUCXFtVDAIbAgUJA70hAAJACRB0u2uaTLs9OMF0IAQZAQgAHRYhBK7WIv4CB360 
tcFGwUKiedJIzcMQBQJcW1UMAAoJEEKiedJIzcMQH+cQAIQYXDnqi4Hl21LtAgky pZxug+x/LECVlwkrIfaQF337+fG+H9J7SdU87Sn1Xe/YUgQnF0XP/fjIVFM0e/Tb xVlmTFqiejLnIwJJDgUaHO3POT2sGEyO3tc0mqSzyRBxtMQ8yvApccBhL5QODv3h hlRWgk5MXU0IPeXw134IWm+o/PRiPBoXPawvVfEVIBlUFaiSZASf4BAiSad4aJQe P8PyP7FPvQB1xiib0iSetn6ZmNeN2OSUJPiPA8aE9JCKuFtomVQEDM0BqQDl5A7h 5O2uyf0Li+/ArqBvfBjrH03e5zbID02dO3D2BjsV3jUeVPQ5WDgVg8LH+nfg/rRy wfCsx9zFp1mt3K4xN2v7IKwxGndApgCcx17gsjzMvLz0J7sSGov4MNjzqvGEDKCl uUvNKXqy7je9xcQLpoyvWtoWFXWTbQAcK5Vv+hC67r9bHpjI1KuqA8hYqNKxsv7s wiLZdd4SK9SIuwf0j8/XTZwmoFfGolJil0ZNxyqBF39+CMVpaHdLM1qKZz99TVzS h4obOOjkUjK458xSo0XCbJ4qXYp7PgxyWK6GIbTozbbG/1ldw+LUnqxt8Shf797L J9lbI3ICuR2P5PYlKJf3b6D9GyfqyrP387fKAKhHsYkZ1XD54/8wIgTrdfeNPtL0 1mjWDjw5KvO9kuPBjcmzgt+NrtsQAJwKeZsiqLLcY8kJ9xP+/xtTlh2iVuZMfxwq hwlo4MMCzpobLDZ/JKU398m77eboTKJSBfeUYxQd4ATn1L8NLKjLxKAaBkjEk0nN 8w9OUQbFlhQ/asLzzF7Z9IGGh9/SEgBZ8V67a0O3Qw9Xdi3ARK3bbZ8RIVJ0+P9G CGrfq9j4ZmGA2L4irLjsvDAv7CSMb4WBKW8j0Jz5LFMwOMJgG1TT5c6lNqFj6y09 rZcVLnt8+lUv2Bw3LC0oI1TjFkrrCzIdfg++mPi3K/ZFc50bvnWF4eCOjgZ5U9Vb sxFZq3+vTRcIfI9z2lZ9CNDRA1O5jGvuVtEGLiSLF2aJ6kiNriLuuGTlXfg/Fpgh GTvyppOTzF7PtHzHBQ/ZjnhWojnc/jyJRwLK8cCl6+EOc887v8BDmqgFWtmycsE2 5fDJ7UFGP13g/eDL3ZUgMDty5dQaUOTX145t2KT+lMqpY6ZK2EC+eoqrnIGJ+tYy 0l4RRxi10mbNhuPIIDdph7X+mUHgCeA9gyF0Y+LqiB6CX+zFg7ovLvnCbMPxdGXq z7AjfwqZBKI+BVuBeDtyW4onmElCu5cXNKsg3W0IlQlZf9PMDU6Ht0XLUs7EPfbQ sH1Vqi1XE1W/tGnkmjcpG/qlt9Gx1uwFGLP6iomqUBc2c0GZ6R1xplXvd3w3yC8d 8lAgPGImuQINBFxbVToBEADkuxhQx9gxlzzCc0nUu2v82XsD+GzONp9irt14gslx te96eKaTXTi0t5eya0X5TIY3wbREwjlfAeM9AfcAmWcsM4izrfPtANM6WOxB2Tbz EY2cqv7NBQii7Z5aqPyjcIiT0b0Gs2evlDkn3xEBBqTSrNcnGSA29bZPIkaUb7Qo p/Ani0S3/tgcR21gXsJwkgpfNKwvPT03Lz3/o5rXAyag0M/25adgk9SVKNcXc8h2 HSGv5ENjwUKNNnowVbNLw4287mFUM2Vd6unGJ2MBj7aUwTrfBl7gNV96mMdDJWcB hGKYkxUvibuHCa2KH7gTrnV6X7sdrgD5CbJMPq6OZNSP6n6bUVg22eHxoETplFwT 4NvV3clRMWIAG1XgXR1l99LAh7PPnPMM1pHQGPwYHQskoBFS4g5knzHpB9h9TfZ3 MM4cDZR5NgWmE0fYVnWe5ax+wW0/IOklUoHv3qoL4yiN9wFJq2oLzUNQd9+tsqiy 
vxSTh8iYmHegyn5KuBPsrMPgvqiKOdalTZKkak9DOx4cGQL2qHspKxiBOb6uox2v fjMQ5bDeUn+4DYMdnZNHeywCUegJmDakUtlfvN+136IDHGwfdGcitqzswzd3+PI2 qlwPE19gkrp9NUaD3Qj2ZtDP7sU2cThc6Gra5KRFW8f98bI77j1Wu6pCnYFLqPz4 QQARAQABiQRyBBgBCAAmFiEErj+seWcR7Fn8AHqkdLtrmky7PTgFAlxbVToCGwIF CQO9IQACQAkQdLtrmky7PTjBdCAEGQEIAB0WIQR5HX64jryNAThDSqwz3zWa56YK eQUCXFtVOgAKCRAz3zWa56YKeSWOEADK8u03LESGSQlZQqnnCAI8iYs1s+XRMEnG 2tAQ1OK7/4eNgr1yZckmaW4FBMgeEgYIBJ7v3SlW7Hf7dE10TYPNGbP6UxVW8HIP rA4CINcGZXWWwpS374JNMS6A5eb6viuEgEMEi00jx0MmLvCMZKypmwXQUl5YJ5nB ytpQ1681mCQxGBMhT1eKQt3B4nAsoEnP+HnqVM/nKxBemSBNXX+C0b/YeQoLC3sD L+Z0NRI8U6PZl9Rokod3uynH0vfBYCEJd6MvsjtnJlVVaseYIA3ESNrFG12tw95I wKNrVCANZ1DBSyK4ovmmWsDrH+uFTHSLNjlxIuVxUfmXcLfgcepVCmd/7Z7UrWYr SXSvP0VG4ZmEPE7tNb8bfyADftO1cVsmcHBQeSrgvpSrTv9L8MocojpR5vJc1f+a sBT7rAeGzZP9riz1GmryXawaZgdLfaaJfzRQkc1uTChb7kMN+UMhVUdCAXmho0XO SfcsW84u/LpjdYh2Ww41xQO6EWvbZDNgD/Fdmp8Uh1MqJ1Dejri6kjNn6wPImXJd Eu6nHqWDRdYsfT4XUB18tB+4aIpFzCyIgpf7p1uaVU7Oqip5sZkc/WXKr77lV23m PQvpGRNCzgU2TJY7ktR3LOvUVN6wNfLMHzeQk18NdmcEGUrJ0YYtl9vE5/Eg9L6x LBH9PKt17IQ8D/9DLwQX8pl3fuTM8ZbzIPLxiXhbgzBBTXKRE2u1888+RIq9xE7c aVFjwq4qpgqZ5SFonTcG4Pi5ck3mFAzyA5zLRF+ckpmBpwSPMpLwCpv10369D1jh AF3JsUwt6DIb2BISMhh2ThSUMSKO75q8GSotsKjJyjD6vl1x4L7WXubTWxEiNuwD 3kAjFWS1Z1VWtA9SURWAbsDaCV4VmwCCpSIwRr9OTbyu9XuMdMxGNpl8SwW7MVQb x4aYNvR7Hl/wIR71AHAXoSfrKp3p12anXjYYASHmbm16ugP4H7HLMBfznKet2f76 gIxJr1CsAMTSqypcC1UoVb6Gz8djeIR+GU+6efHI4TIUMy5uMIUx8tYbwSEeo/y6 NnjpJFYYjJa671iSABInNxs4+X+1zrFa+wl45EnaFxziEet2Qzv/VsusoLvLwnYi BZckclAS5xoVGFW0WJ01OfLUDHxGMt9GSheL8c+GLMaMtaCWunpmmt9zZ9WdpBOu AGluMG1Cee50TrhXaGE8CdNr8nOdSeLNAveBAPmuVa0JDSe20/D/RuYJLKeG9Vsq BZvjuGlOUsfl6UjtiGRbgS9OWpxeez5ugc9yyV+rBGIpmnIb+9quz2HmGxE65eA2 cRNsZRIjFLzeAx/0RMaT1nlLFTBbUuZ+tJ+fgFtRGMhifZn1pb2dMQo0N7kCDQRc W1VuARAAv4LYaNq2Zev/v7M5DnxLpgHRcMkG7TOQpycrlK5653llpZzTy3mh5peW vcq3IDmdeUIJxQ+WDh2f0vS+NIKDC/HAddfHrZPbhO7zLxLcMW5KmV05ancaRSP0 s0+IyQmvVxUNrgPinZiphlvRGoLXS6pdgfc4jIR9B2umPecfvfu/6EWFPnXZgG8K 
yY3Z+mwrmEO0FaXHBQuu6nactiPe79N4bLe8hk9RW6yIxLBeJzIoOlIcJmuRHapt nS2lV3mfhZdFnkAp1o6a2TL5BwgMY0wZUKZr78HEMKh6LbPN9rPepf0neUeq/k1l NJU7V6XMS+rezF31vgSJ5KoNGYhxtWZ54uksH2rcw7+ltpSVtqY91G/vibpRCJG3 LdX/kxHni1NEWyZlpS/6ntuH6HSoNYsR9IMsbESs3QVCH74ApK88CxYCRB0SEo0M yAElbQ3bfEKCKl/FwC4IzAYAJ2arWKwBHRSJlsrNCtczrjG7j3EyJrn8+Tm5yjO6 0THQjvc/nBxrNE09r1Lzz7jrDWC9Rl+BH6wqdniymoYyUAQsX2rZ+Jhah1Zkf+Gu 76qtY+EH494dPM+0FazcBlgBd6/J5mh3Wk9JuecXLTEUGtzd1GmI9CENPAklCauX tNOWeTop27djuKWsZxuP1GyV6UYixFVOSWteyAbA32cncVv/2ZUAEQEAAYkEcgQY AQgAJhYhBK4/rHlnEexZ/AB6pHS7a5pMuz04BQJcW1VuAhsCBQkDvSEAAkAJEHS7 a5pMuz04wXQgBBkBCAAdFiEEFWiQaF6g32oTce8gF8xdsfAIhAcFAlxbVW4ACgkQ F8xdsfAIhAd4jxAAiO9+VRQQ3eBOsJRgANdgL/l51kq7qE3u8xnSqNkrmdYDdT2H TYH5W4n2AmGo50BDafdjd6tut0qtzA3/hGWCooydxKFOsnIYziUeoHvlICj3RkHO y7utcFhAgRWi+kzFwnnXGf13dMU9iG7yvKrCrCEw44gzoQ1KnY1Xsj18n5JkqxeT 94bzcSbz20OpOSIMfSQPrpy18WrZYwHodcIZ3IUUACCpMZdfTa9c/qHRQ/rcwl+B 0JlHx0V4AYiSAsiMVgflO1Eqi7apPuwxPPd5nnHkrdDM9CYC3LdBORBXwncG3oZ5 eTSXmsvFxHXH41JHsm/1QFcVmFAYhu9qJFCGiD+8UeTFtT+nnHU69BszgtUskqX8 k9PqLdK7Vxkp16wc6WOp1NeIQ6Fd4PxTGrPqs9bJk7TlYtTFWpA0X+EMj/San+Ku PxqLEa4Ab12R4vs1pCrn/g1z3C/6ujH4B70HOrRTIeTjULJ6xdwXGtwUA09hio0r pHhtyZhAh5irUJNto4ZOk/Qyd+dfMsNvRJfbVIK2mmeRaBnp902AsQNgYVdi2Aki 0h4kz3bVLGw7iD/xV2hV69+JwLSijkkmOpz/EjMwj0hDDYrHH3Y3o0dV3dNdk/5i 6lQgcxSVsl9kWlHcoEllKbf0Hb1muKVwoGGYxFYna2jsLFVjG29M7iPSgrHjmg/+ I3fmsLZ0VI9kmxniUlZ6gz5NB5PJ3RXmwKO9LkBgE5C1wpuZbNEQ1NsR2bprlJPm ++GNSo8HaheuTRJn42kkOgfIJwjuvXih3FE/NtRA/W8H2uF6YLDjBKGZJbxQcmsd CTEuCRCVP8X7C5n3rl1YqzfWfNr8QFxvH7ivG7KOlSxvyTKcYatWb9uDUPrnr74f ZaMljHGsNyKj70MzZcrrsmt61yWGR0h+02rmIKlskl4hkh+qF5ehI+Bkd7eblsBy rxEREHq/ij2Vd7l0Z606YCE8vj8WfcsJj8JjwR3A+nND/oNJTTbQ3b8OvasvqIey WqqmGg73nbHjd/VIAUsfvnsEYatDk4pAA/wQr9c4T4s5Q/QRwDrAsa4J89FrDjWC hQBPL7TaP8Af/3Y3/86jLCN4lnW1qjPXv5rhBFeI0EVi1k1qdV06qr5HOk7CwQTT uc4rCdFcEnw8kVKZa/yFnlJfRa0Z4IwSahdp5fdFEuad6LpOcFFnYxWtIWhcg4GT RcMha/OZnsfqOqiAt6In+1IwuJBz3uMM7xw2AMaxzAejGEL63F81C5iJ6Ld6kQK+ 
XblDW0G643bVbzkBb46MAT+UnLuWQUs3NDtk1FEioJyWUgbO/srMH4MoWM7rG8ZT nQPohNmPBrqL2phmE27HQsQ0rTjH2Z2ol7iy9OFMtT0= =MkGo -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/201920pgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":null,"contents":"ISC is a Small Business open source software provider; we author and maintain BIND 9, ISC DHCP, and Kea DHCP, essential software that handles DNS and DHCP functionality across the Internet. Our software is the foundation for most of the domain name service and IP-address assignment on the Internet today. ISC can benefit the US government through our support and maintenance of your mission-critical Internet infrastructure.\nWinter 2025 Update - US Executive Order In the final week of the Biden Administration, EO 14144 was published. Among many other recommendations for improving network cybersecurity, this EO called for implementation of DNS encryption, in all new acquisition contracts, and where possible with existing products.\nSummer 2021 Update - US Executive Order We are closely following the work pursuant to the May 12, 2021 White House Executive Order on Improving the Nation\u0026rsquo;s Cybersecurity, and in particular, section 4, \u0026ldquo;Enhancing Software Supply Chain Security.\u0026rdquo; ISC participated in the June 2nd NIST Workshop on this topic. Although the definition of what software is considered critical for the purposes of the EO is not yet determined, it is likely that BIND will fall under that definition in at least some deployments, and ISC\u0026rsquo;s DHCP software may also be impacted.\nISC development teams already follow most of the secure development practices previously recommended by NIST in the Secure Software Development Framework whitepaper. Many of these requirements are captured by the Core Infrastructure Initiative Best Practices Badge Application. We already require peer code reviews, run pair-wise testing and code fuzzing, and use multiple static analyzers, for example. 
Our system for vulnerability handling is mature and we have extensive experience in that area. One area where we expect to need additional capabilities relates to managing external dependencies on other open source, and providing Software Bill of Materials (SBOM) information. If the final recommendation is to adopt the Software Package Data Exchange (SPDX) method of tagging software, we will adopt that.\nCan you and your agency afford NOT to purchase maintenance and support services from ISC?\nVarious federal, state, and local government agencies and organizations already purchase software support services from ISC, because they know they can trust us to provide the security assistance they need.\nHow can ISC help the US government secure its mission-critical communications infrastructure? One of our most valuable services, and one for which we are the sole source, is our Early Vulnerability Notification (EVN) to alert subscribers to discovered vulnerabilities in ISC software.\nThe BIND 9 Software Vulnerability Matrix gives a detailed list of all known vulnerabilities in current versions of BIND 9. EVN subscribers receive advance notification of these vulnerabilities, which offers them valuable time to address any potential weaknesses before they become known to the general public.\nWe offer complete support services, including the EVNs, level 4 technical support with hot fixes, DNS audits and DNS/DHCP training. As the author, maintainer, and publisher of BIND 9, ISC DHCP, and Kea DHCP, we are the sole source capable of performing these services for the authoritative versions of these software products.\nReferences pertaining specifically to US government open source users You may find some of these other sites and documents useful when considering ISC as a government partner.\nCISA Encrypted DNS Implementation Guidance ISC\u0026rsquo;s Capability Statement for more information, including our CAGE and NAICS codes. 
NIST Resource page on the 2021 Executive Order on Cybersecurity CERT/CC comments on the 2021 US EO Network Infrastructure Security Technical Implementation (STIG) Overview, 2 January 2019 (note discussion of IP-addressing approaches) Domain Name System (DNS) Security Requirements Guide (SRG) 01/05/2015 DoD Open Source Software (OSS) FAQ web page. NTIA resource page on SBOMS BIND DNS STIG, October 1, 2015 Google Group (mailing list) for Military Open Source Software ","permalink":"/usgovt/","tags":null,"title":"US Government Users"},{"categories":["BIND","DNS","DHCP","Training"],"contents":"ISC customers often have a training component to their support contract and should contact their account representative to discuss options.\nTo meet the needs of other users, we have teamed up with Men\u0026amp;Mice to jointly develop and present hands-on, highly rated classes on DNS/BIND, DHCP, DNSSEC, Kea, and IPv6. Past participants have come from ccTLDs, national governments, major enterprises, and smaller institutions. And for each course, there is an optional exam included for certifications accredited by ISC and Men\u0026amp;Mice.\nMen\u0026amp;Mice has recently announced its 2022 training program of public classes. Most of them are offered online, while a few will be in-person, in Europe. We encourage our users to take a look:\nDNS \u0026amp; BIND Jump Start This offers a concise introduction to running a BIND 9 domain name server. The course covers all modern protocol features such as EDNS, DNSSEC, DNS Cookies, and much more. Unix/Linux knowledge required.\nFebruary 2-4, 2022 (online, two sessions daily with lunch break: 9am-12pm and 1pm-5pm EST) September 26-28, 2022 (online, two sessions daily with lunch break: 9am-12pm and 1pm-5pm EST) DNS \u0026amp; BIND Leap Ahead This is an advanced DNS and BIND course, built on the DNS \u0026amp; BIND Jump Start course. 
Participants are required to complete that course before starting this more-advanced one.\nMay 2-4, 2022 (online, two sessions daily with lunch break: 9am-12pm and 1pm-5pm EST) November 7-9, 2022 (online, two sessions daily with lunch break: 9am-12pm and 1pm-5pm EST) DNS \u0026amp; BIND Week This is Men\u0026amp;Mice\u0026rsquo;s most popular course and is ideal for anyone starting out. It is usually attended by those with minimal previous DNS experience, and by administrators with limited BIND experience. Unix/Linux knowledge required.\nMay 16-20, 2022 (9am–5pm CET): In cooperation with Linux Hotel, Antonienallee 1, 45279 Essen-Horst, Germany. August 22-26, 2022 (9am–5pm CET): Hotel Casa Amsterdam, Eerste Ringdijkstraat 4, 1097 BC, Amsterdam. DNSSEC \u0026amp; BIND This is a classroom-style DNSSEC workshop course with lecture and hands-on labs. It is designed for network and system admin veterans who need to know how to deploy DNSSEC for their organization.\nFebruary 2-4, 2022 (9am–5pm CET): In cooperation with Linux Hotel, Antonienallee 1, 45279 Essen-Horst, Germany. June 20-22, 2022 (online, two sessions daily with lunch break: 9am-12pm and 1pm-5pm EST) December 5-7, 2022 (online, two sessions daily with lunch break: 9am-12pm and 1pm-5pm EST) December 14-16, 2022 (9am–5pm CET): Location TBA KEA DHCP KEA DHCP is the newer open source DHCP server from ISC. This class is an introduction to the KEA DHCP software on Unix and Linux systems, including topics about DHCPv4 and DHCPv6.\nSeptember 26-28, 2022 (online, times TBA) These dates and times may change; please refer to Men\u0026amp;Mice\u0026rsquo;s website for the latest information.\nCertification exams The Training Certification Program is accredited by ISC and Men\u0026amp;Mice, which is world-renowned. It gives administrators an advantage in their business, as Men\u0026amp;Mice certifications are recognized as industry-standard. 
Anyone is welcome to sit certification exams for the various topics.\n","permalink":"/blogs/2022-training-classes/","tags":["training","BIND","DNS","DHCP"],"title":"DNS and DHCP Training"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"We are aware of the Log4J/Log4Shell vulnerability published recently. Several users have asked if ISC software is exposed to this vulnerability.\nNone of our currently supported versions of BIND 9, Kea, and ISC DHCP as published by ISC includes Java code or links to the Log4J logging utility. We have also checked our published BIND Docker image, and that does not include Log4J in the image. It is possible that third-party packaged versions of our software have been modified to somehow use Log4J, so if you are getting open source from another publisher, you should check that.\nStork users should be aware that Grafana, an open source application used for visualizing data from Stork, is vulnerable to a different 0-day vulnerability and must be patched. Grafana is included in the Stork \u0026lsquo;demo\u0026rsquo;, which should not be used in production. The Swagger API generator used by Stork is technically vulnerable to Log4Shell, but the vulnerable component is only used while testing Swagger, and a \u0026lsquo;regular user\u0026rsquo; is not exposed.\nISC follows a published process for assessing security vulnerabilities in our software. This is documented here: ISC Software Defect and Security Vulnerability Disclosure Policy.\nIf you suspect you have found a security defect in BIND, DHCP, or Kea, or if you wish to inquire about a security issue that you have learned about which has not yet been publicly announced, ISC encourages you to get in touch with our Security Officer by following the process described at https://www.isc.org/reportbug/.\nAlternatively, you can email us at security-officer@isc.org. 
However, plain-text e-mail is not a secure choice for communications concerning undisclosed security issues so we ask that you please encrypt your communications to us using the ISC Security Officer public key which can be found on our website at: https://www.isc.org/pgpkey/.\nDefects that span multiple DNS implementations If you believe you have found a security vulnerability that applies to DNS implementations generally, and you want to report this responsibly to a number of implementers, you might consider using the Open Source DNS Vulnerability mailing list, managed by DNS-OARC.\n","permalink":"/blogs/2021-log4j/","tags":["security","ISC"],"title":"ISC Open Source and Log4J"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"Well, we\u0026rsquo;ve made it through 2021. This was supposed to be the year when vaccines defeated the coronavirus, but it hasn\u0026rsquo;t quite turned out that way. But all things considered, we\u0026rsquo;re better off than we were a year ago.\nWe at ISC are grateful to our customers, users, partners, and friends for their ongoing support, and we wanted to take this opportunity to send out our thanks and wish everyone a very happy winter holiday season. Since our company is geographically dispersed, we thought it would be fun to collect some photos and audio holiday greetings from all the countries around the world where we have staff members. Please download our PDF \u0026ldquo;card\u0026rdquo; below and click on the icons! (You may need to also download Adobe Acrobat Reader. Unfortunately, the audio greetings don\u0026rsquo;t work if you view the PDF in a browser, and they don\u0026rsquo;t always work in the Apple Preview app. 
But it will be worth it, we promise!)\nWe wish everyone a better year in 2022.\n","permalink":"/blogs/2021-holiday-card/","tags":["holidays","2021","ISC"],"title":"Happy Holidays from ISC!"},{"categories":["ISC","Open Source"],"contents":" ISC is constantly striving to provide better service to our customers, and the most significant way we do that is by hiring hands-on network administrators who are passionate about open source. We are proud to introduce another recent addition to our support engineering team, Dan Theisen!\nDan started with us in early November 2021, and has hit the ground running. So far he has been primarily focusing on investigating customer issues that appear to be bugs, defects, or missing functionality in Kea DHCP and BIND 9. He hopes to develop his experience with both codebases so he can make useful contributions to them in the future.\nDan got his first job in the tech industry at age 16, when he was still in high school, and he\u0026rsquo;s never looked back. He started by asking for a tour of a local data center, which turned into a part-time job as a technician. The years since then have been spent focusing on customer support, quality assurance, and system/network engineering at a variety of enterprises both large and small.\nHe is happy to join the ISC team and says, \u0026ldquo;I greatly appreciate the opportunity I have to work with such a knowledgeable and dedicated team on software as important as BIND and Kea.\u0026rdquo; He has chosen to work in the open source world because \u0026ldquo;I prefer spending my time on software that is accessible to and usable by everyone. Open source code and open standards are incredibly important, because without them, commercial products would likely struggle to interoperate outside of strictly controlled corporate interests and use cases. 
I reject the philosophy that everything should always be about making as much money as possible.\u0026rdquo;\nDan adds, \u0026ldquo;Messing with computers has always been a hobby as much as a profession for me, and my technology interests generally lead me to tinker with a lot of different software and hardware in my free time. I believe I bring a broad set of general knowledge about networking, systems engineering, software development, and information security to the table at ISC. I love learning about the inner workings of Linux and other operating systems, and it is exciting when I can apply that knowledge to my work.\u0026rdquo;\nLike several other ISC staff members, Dan is interested in ham radio and operates his own home networking lab setup. We are thrilled to have him on board and know that our customers will benefit from his insight and experience!\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. We welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-theisen/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Dan Theisen!"},{"categories":["Stork","New Releases","Kea"],"contents":"ISC is delighted to announce the release of Stork 1.0.0. This release marks the end of our first phase of development. We have created a monitoring system for Kea DHCP servers, based on popular open source components (Go, Angular, PostgreSQL), that integrates with popular network and system management tools (Grafana, Prometheus).\nWe are rolling the first significant digit, from last month\u0026rsquo;s version 0.22.0 to 1.0.0, to signal a level of maturity and functionality. Most of the monitoring features we set out to implement for Kea have been added.\nStork addresses the most urgent concerns of DHCP administrators: Are any of my pools nearing full utilization? 
Are any of my shared networks nearing full utilization? Are my DHCP servers receiving traffic? How can I tell if servers in a failover arrangement are really \u0026ldquo;ready\u0026rdquo; to take over in a failure? Are any clients declining leases? Stork also addresses some less-urgent - but still important - questions, such as: What software versions are my various Kea servers running? Which servers are using a memfile for leases, and which are using a database backend? Where are the host reservations kept, in the configuration file or in a database backend? Which of the configured host reservations are actually in use? How many leases per second are we processing? And of course, when there is an indication of a DHCP problem, the next thing the administrator wants to do is: Quickly see the logs for that server. Stork Provides Easy Access to Kea Server Logs Browse the configuration file. Stork Configuration Browser Run some basic configuration checks. Stork 1.0.0 Stork Configuration Review Flags Possible Issues Download a package of Kea configuration data to share with ISC for troubleshooting. Creating an archive for troubleshooting More screenshots showing other Stork features are included in the other blogs posted with prior Stork releases.\nAlong the way, we have put Stork support for BIND 9 on the backburner, where it remains. We do have a BIND exporter for Prometheus in the Stork Agent, but we have been focusing our BIND development efforts on the named daemon itself, rather than on Stork.\nIn 2022, we plan to develop a design for managing Kea configuration via Stork, eventually adding the ability to make configuration changes via Stork. We have been adding security features to Stork, which will be essential when Stork is able to make changes in production network servers. 
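As an aside, the utilization questions listed earlier are ones Stork answers by polling Kea\u0026rsquo;s REST API on a schedule. For readers curious about the raw data behind those dashboards, here is a rough sketch of the kind of command involved; the subnet ID and statistic name are illustrative examples, and the authoritative list of statistic names is in the Kea ARM:

```json
{
  "command": "statistic-get",
  "service": [ "dhcp4" ],
  "arguments": {
    "name": "subnet[1].assigned-addresses"
  }
}
```

Comparing a value like this against the corresponding total (e.g. subnet[1].total-addresses) gives a rough pool-utilization figure; Stork automates exactly this sort of polling and presents the trend graphically.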
We encourage users to monitor our development plans in our open Stork GitLab environment, and to enter issue reports and feature requests there.\nRelease Model Unchanged Stork has normally had monthly development releases, with a few exceptions; that isn\u0026rsquo;t changing with the 1.0.0 release. We plan to continue with development releases, and we aren\u0026rsquo;t going to maintain a separate stable branch. At this point, we think users would rather we continue the current fast pace of development so they see new features arrive quickly.\nWe encourage users to test the development releases and report back their findings on the stork-users mailing list.\nUse Our Packages or Download Source The easiest way to install the software is to use native Debian or RPM packages. They can be downloaded from:\nhttps://cloudsmith.io/~isc/repos/stork/\nThe Stork source and PGP signature for this release may be downloaded from:\nhttps://www.isc.org/download/#Stork\nDocumentation In addition to the man pages and on-screen Help, ISC provides documentation in the Stork Administrator Reference Manual (ARM). It is available on ReadTheDocs.io.\n","permalink":"/blogs/stork-1-0-released/","tags":["Stork","New Releases","Kea"],"title":"Stork 1.0 Released"},{"categories":["BIND","DNS"],"contents":"The BIND 9 core development team includes three people who focus on quality assurance. Given the size of the BIND legacy codebase, and the activity level of the larger development team, ensuring the quality of our monthly BIND releases is akin to the task given to Hercules, to clean out the Augean Stables in a day.\nBIND 9 Statistics Code base: approximately 680,000 lines of code. Changes per month: 50-150 MRs merged. Core developers: 9 engineers, including 3 QA specialists.\nThe Synopsys Black Duck Open Hub site rates BIND 9 as \u0026ldquo;Very High Activity,\u0026rdquo; which means the rate of change in the repo is unusually high compared to other open source projects.
This high rate of change, along with the overall size of the codebase, make ensuring quality a particular challenge.\nTo get an update on what this team is doing, and how they are managing this task, I interviewed the team leader, Michał Kępień, via email. Clearly an above-average ability to prioritize tasks is important in this role: the answers below came back about five months later.\nBIND 9 QA Overview What are the main responsibilities of the BIND QA team? (I am thinking about release operations, packaging, maintaining the build farm, monitoring performance lab, triaging bugs, CVE processes, etc.)\nWe try to help everywhere we are needed, but our \u0026ldquo;official\u0026rdquo; day-to-day duties revolve around:\nImproving existing tools (and developing new ones) which help the developers make informed decisions and ensure the code being committed is not broken in one way or another,\nOverseeing the monthly release preparation process (which includes enforcing the schedule, looking for missing bits, polishing documentation, examining final test results, packaging, and more),\nMaintaining the CI environment (keeping the list of operating systems we run tests on up-to-date, monitoring capacity, tweaking settings for optimal resource usage, and more).\nHowever, we also help developers reproduce bugs, review merge requests, carry out ad hoc tests on request, and many other things, depending on current needs.\nWhat things make BIND QA challenging?\nThe same things which make maintaining and improving BIND 9 code challenging: the DNS protocol itself is fairly complex, the deployment base is huge (which means the number of use cases out there is practically unlimited), every deployment environment is different (both in terms of the hardware/software platform used and ever-changing network conditions), and there is a lot of source code which was not written with testability in mind. 
This means we have to prioritize to at least cover the most typical scenarios.\nContinuous Integration (CI) Tests and Release Operations What kinds of tests do we do on every commit as part of our CI?\nThe code is built and tested (unit tests + system tests) on several popular Linux distributions, FreeBSD, and Windows (where applicable). Some of those builds employ various sanitizers (ASAN, UBSAN, TSAN). Both GCC and Clang are used; compilation warnings are treated as errors on the supported platforms.\nApart from the above, other tools are also run to ensure consistency of coding style (clang-format, Coccinelle for C code; flake8, PyLint for test code written in Python), enforce the development process we follow (Danger), and detect the more obvious bugs early (Clang Static Analyzer).\nAll in all, about 70 jobs are run for every revision of each merge request. On top of that, scheduled pipelines are started for each maintained branch on a daily basis; these include a few extra jobs whose purpose is to either fill the gaps in platform coverage or run tests which take too long to be invoked for every merge request (e.g. some performance tests, respdiff).\nWhat other QA processes do we do on a release candidate?\nIn terms of code testing, it is not so much a question of what extra tests we run, but what test results we look at closely before signing off on a release. All of the tests run for release tarballs are automated and therefore also run (at least periodically) for preceding revisions of the source tree. 
At release preparation time, we compare current performance results with those obtained for previous releases and analyze intermittent test failures to ensure they are not manifestations of lurking bugs (usually they turn out to be test code deficiencies, but it is not always immediately obvious).\nWe also clean up the release notes and verify whether the documentation changes introduced since the previous release are accurate, correct, and complete.\nHow much time do you require between code freeze and release of a maintenance version? What happens during that time?\nIt varies case by case. In a typical release, it takes about two days to do the things listed in my response to the previous question. That may sound like a lot, but note that in certain months, we have five releases to prepare: 9.17, 9.16, 9.11, plus Subscription Editions: 9.16-S and 9.11-S. Sometimes we wrap up within a day, but then other times some nasty bug is found at the very last minute and that throws a spanner into the works.\nWhat tasks take the most time for the team?\nIt really is a mixture of all of the things listed above. While we try to make sure the infamous bus factor is above 1 for critical work, each one of us has their own niche of a sort in terms of what we spend most of our time on. Scheduling and prioritizing can be tricky at times because on the one hand, an innocent-looking OS update might trigger issues which take days to solve, while on the other hand a task we anticipated would take weeks to complete can sometimes be finished sooner.\nHow much has our increased effort at packaging added to the workload?\nMost of the work required to make packaging work was a one-off effort of setting up a build system and integrating it with Cloudsmith. 
It took a while, but things are pretty stable these days and ongoing maintenance boils down to bumping version numbers and applying occasional tweaks to the packaging recipes or the testing scripts when something breaks in a new release.\nAssessing Our QA Effectiveness Of the different types of testing - static analysis, packet fuzzing, unit tests, system tests, build tests, performance tests, security testing, and so on - where do you think we have good coverage/effective testing, and where could we improve?\nI think we are doing pretty well in terms of testing the build process. We used to get non-trivial numbers of reports about broken builds after each public release. This effect seems to have subsided in the past months and I think GitLab CI running on a reasonably broad spectrum of platforms, combined with pairwise testing of build options, played a major role in that.\nThe development team also managed to eradicate all known issues reported by various sanitizers (ASAN, UBSAN, TSAN) in BIND 9.16. The recent refactoring of the dispatch code opened up some new code paths, leading to new warnings which need to be addressed, but I am confident we will get these sorted out over time.\nWe are continuously improving the scope of our internal performance tests. The goal here is to be able to make informed design choices based on solid data rather than just gut feelings and/or educated guesses.\nFuzzing tests these days seem to have reached a point of diminishing returns in terms of issues discovered in existing code, but they allow us to sleep better at night, knowing that any issues with new code will be detected in due course.\nAs for unit and system tests, the challenge here is that writing them is a retroactive effort in the case of BIND 9: while we are writing tests whenever possible for new code, there is still a large volume of code which was committed in the past and not accompanied by tests. 
In terms of the ratio of lines of code covered by unit and system tests, we are currently just shy of 80%, but this is just one of the applicable metrics.\nHow do you evaluate the effectiveness of our internal QA efforts (e.g., do we track how many bugs we find in internal testing vs external testing? do you have a sense of whether we are finding a healthy proportion internally?)?\nTongue-in-cheek: as long as we are getting any external reports about actual bugs, it means there is room for improvement in our internal testing.\nGiven the above, I am afraid we do not do any kind of tracking or statistics. With the resources we have, we try to prioritize fixing problems, of which there is an abundance.\nDo you feel good about our ability to prevent the recurrence of previously discovered and fixed bugs?\nYes, definitely. I am more concerned about our ability to predict future problems and/or catch mistakes in new code before it gets released to the public than I am about our regression suite. Given the number of test jobs we run, even rarely occurring but known problems should become exposed over time.\nWhat about performance testing and preventing performance regressions - what do we do as far as that, and to what extent is this ad hoc vs. a regular automated process?\nPerformance evaluation is currently not fully automatic: the tests are run automatically on a regular basis, but their results are examined by humans. Resolver performance in particular is a multi-dimensional subject and there is no single metric that would allow one to look at it and say \u0026ldquo;this is unequivocally better/worse than before.\u0026rdquo; We are, however, exploring possible solutions for automatic flagging of drastic shifts in performance numbers.\nWhat are the BIND QA accomplishments in the past year or two that you are most proud of?\nThe most significant accomplishment was Petr Špaček\u0026rsquo;s work leveraging the CZNIC resolver performance tools for benchmarking BIND 9. 
(Note: this is a realistic test bed for benchmarking resolver operations, which Petr describes in this talk at RIPE79.)\nThe other thing I am happy about is that we have managed to establish and maintain a monthly release cadence, which is more challenging than it may sound.\nLooking Forward If you had more time or resources, what are some projects you would love to tackle?\nGiven the current state of the worldwide software industry, starting an alpaca ranch crossed my mind more than one time in the past\u0026ndash;\n\u0026hellip;oh, you mean for BIND 9? Due to the nature of our work, we discover new research and experimentation opportunities almost on a daily basis and each one of us has some ideas on what we could follow up on if there were time. There is room for improvement in the way we write tests, store their results, visualize them, and track trends over time. Sometimes, we allow ourselves to go down a rabbit hole or two for the fun of it as it helps prevent burnout and sometimes even results in new automated tests being implemented. But usually there is enough priority work in the queue to force us to defer \u0026ldquo;greenfield research\u0026rdquo; until some undetermined point in the future.\nIf you could order some new tools to be \u0026ldquo;magicked into existence,\u0026rdquo; what would you like to have?\nIf I could ask for a pony, I would like to have a tool that would give us deeper insight into how people use our software: what configuration options they use, how that relates to the nature of their user base, what platforms they run our software on, etc. Given our huge deployment base, it is challenging to assess whether certain changes we make are good for at least the majority of our users - and we have to resort to educated guesses. 
For some changes, we consult the community through our mailing lists, but even though the subscriber lists are of substantial size, it is still just a small fraction of the entire user population.\nFor years already, we have had internal discussions about potential solutions that would allow such \u0026ldquo;reporting\u0026rdquo; in a secure and anonymous way, but finding the sweet spot between \u0026ldquo;spyware\u0026rdquo; and \u0026ldquo;white noise generator\u0026rdquo; is quite challenging. But hey, you said magic was on the table.\nAre there free open source tools we are using that you would recommend to other open source projects?\nIt may not be the type of thing you asked about, but for bug reproduction and troubleshooting I can certainly recommend rr. Seriously, if you have not tried it, drop everything and try it now as it might revolutionize the way you approach troubleshooting.\nWhat can users do to help improve BIND quality? How important are open source user bug reports to our overall quality process?\nBug reports are always much appreciated as long as the reporter provides us with actionable information (we provide a GitLab issue template to indicate what we consider useful information) and/or is willing to cooperate when asked for more information (or experiments). Some classes of bugs are pretty much impossible to track down without extensive help from the reporter, either due to the platform used, the specific network conditions in effect at a given time, or other dynamic factors.\nPlease also remember that while we would love to make our software 100% bug-free, we have to prioritize and we do not have the resources to fix every single problem reported. That does not mean we do not appreciate the reports and users\u0026rsquo; cooperation, though.\nThis sounds like a good opportunity to remind people that kindness goes a long way in the open source world. 
There is no better way to make sure your problem will be ignored than through obnoxiousness.\nHow do you keep the QA staff motivated, and how do you maintain your own motivation in the face of a fairly large volume of ongoing issues?\nI touched upon that in one of the responses above: going down a rabbit hole that fascinates you in one way or another from time to time tends to be good for morale. Rabbit holes can be opportunities in disguise because you never know when exploring new areas of knowledge or experimenting with new tools (even seemingly unrelated) will make you more effective at your day job and/or make the product you are working on better. Every job involves bits that you hate about it - the important part is to make sure those ugly bits do not take most of your work time.\n","permalink":"/blogs/q\u0026a-about-bind-qa-2021/","tags":["BIND"],"title":"Questions and Answers about BIND 9 Quality Assurance"},{"categories":["DHCP","Kea"],"contents":"We are very proud to announce that we have just posted a new stable branch of Kea, Kea 2.0. Kea 2.0 shows the effect of wider deployment in service provider networks, with a number of improvements to promote stability and performance. We have also started to secure Kea\u0026rsquo;s management interfaces. We would like to thank those users who reported issues and worked with us to troubleshoot and fix, and otherwise improve, Kea.\nPerformance and Stability Better Performance with High Availability (HA) Kea 1.8.0 introduced multi-threaded support that significantly increased the packet processing performance. However, many deployments were constrained by the relatively slow communication between HA partners.\nIn Kea 2.0, the HA component has undergone a substantial architectural change. When HA+MT is enabled, the DHCPv4 and DHCPv6 daemons are now able to open HTTP sockets on their own and connect directly to each other, bypassing the Control Agent (CA). 
This eliminates the bottlenecks of sequential UNIX socket connections and the need to translate between HTTP and UNIX socket connections. With multiple threads and multiple direct connections, performance improves substantially, in some cases by an order of magnitude.\nHigh Traffic Environments With help from our users, we were able to pinpoint and resolve some issues that were only observed in high-traffic environments.\nWhen Kea is unable to keep up with the incoming traffic, it parks some packets for later processing. The length of this parking-lot queue is now configurable; a default value of 256 is used. Lower values tend to provide more responsive service with a higher drop rate when overloaded, while larger values do the opposite.\nReclamation of leases stored in some older versions of MySQL was inefficient in earlier Kea versions, causing the periodic lease reclamation process to take an increasing amount of time. The issue is now fixed, which should result in much better long-term performance.\nA new statistic, packet-queue-size, has been added that reports packet-queue utilization. It reports an average for the last 10, 100, and 1000 packets. This uses an approach similar to the Unix top tool, which returns CPU utilization for the last 1, 5, and 15 minutes. This may be useful for fine-tuning Kea performance and its queue length.\nMore Resilient to Communication Failures, Overloading Kea relies on multiple components to provide service. We have added more resilience to preserve service availability despite some degraded connections between components.\nA new parameter, on-fail, gives the operator more control over what to do on database connection loss.
It has three possible values, which govern whether the DHCP service should be disabled and Kea should shut down, or whether Kea should continue DHCP service after all the configured retries have been exhausted: stop-retry-exit, which stops DHCP service, attempts to reconnect, and terminates Kea if reconnection fails; serve-retry-exit, which continues serving DHCP traffic, attempts to reconnect, and terminates if reconnection fails; and serve-retry-continue, which continues serving DHCP traffic, tries to reconnect, and keeps serving even if reconnection fails. This setting is particularly useful for connections to forensic logging and configuration backend services. HA is more responsive when recovering from communication failure. We introduced a new communication-recovery state; in this state, the load-balancing servers remain responsive to DHCP queries while the communication between them is interrupted. The new feature is controlled using the delayed-updates-limit configuration parameter. The DHCP service can be independently enabled or disabled by a user command, by the database connection mechanics, or by the HA library. The DHCP service is disabled when any of those originators disables the service, and it is enabled when all those that previously disabled the service enable it. The servers can now recover from situations where both went to partner-down state and the communication was broken in one direction, but worked in the other. Synchronization of the standby server is now more robust. Cache Threshold Protects Kea from Broken Clients This popular ISC DHCP feature has now been implemented in Kea.\nSome clients renew their leases earlier than specified, either because they ignore the renewal timer or because they are broken. Frequent early renewals put an extra burden on the server, which has to write updated leases even though they may have been renewed only seconds earlier.
The cache-threshold (expressed as a percentage) and cache-max-age (expressed in seconds) parameters help reduce that extra burden on Kea. Kea still responds to the client but merely resends the existing lease lifetime, thus eliminating the need to update the lease database.\nSecurity Improvements We have made a start on transforming Kea into an application that does not require a trusted \u0026lsquo;bastion\u0026rsquo; host in a protected area of the network. This is a big adjustment for a core network application, so it is going to take a number of releases to implement the new security features required for this. We have focused initially on providing authentication and encryption for remote management connections to Kea.\nKea now supports basic HTTP authentication, as defined in RFC 7617. It is possible to configure a list of credentials (pairs of user identifiers and passwords) that the user or script must provide to use Kea\u0026rsquo;s REST API. Authentication information is logged on a dedicated logger, making it easier to implement security policies, such as logging to dedicated secure storage. Kea also obscures passwords in debug logs when the whole configuration is printed.\nKea\u0026rsquo;s Control Agent (CA) now supports TLS. Three modes of operation are available.\nThe first is plain HTTP with TLS completely disabled; this was the only mode in earlier releases. The second mode is encryption, where the CA accepts TLS connections. This is the typical mode when securing a website, where clients and servers are not under the control of the same organization. The third mode (and the default when TLS support is enabled) is mutual authentication between connecting clients and the CA server. In this mode, clients are required to identify themselves using TLS certificates: the clients verify the server\u0026rsquo;s certificate and the server verifies the client\u0026rsquo;s. See Section 23.1 of the Kea ARM for details.
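To make the above concrete, here is a minimal sketch of a Control Agent configuration combining Basic authentication with mutual TLS. The addresses, file paths, realm, and credentials are placeholders of our own choosing, not defaults; check the parameter names against the Kea ARM for your version (Kea\u0026rsquo;s configuration parser accepts // comments):

```json
{
  "Control-agent": {
    "http-host": "10.0.0.5",
    "http-port": 8000,

    // Mutual TLS: the CA verifies client certificates against the trust anchor.
    "trust-anchor": "/etc/kea/certs/ca.pem",
    "cert-file": "/etc/kea/certs/server-cert.pem",
    "key-file": "/etc/kea/certs/server-key.pem",
    "cert-required": true,

    // Basic HTTP authentication credentials for the REST API.
    "authentication": {
      "type": "basic",
      "realm": "kea-control-agent",
      "clients": [
        { "user": "admin", "password": "change-me" }
      ]
    }
  }
}
```

With cert-required set to true, clients must present a certificate signed by the trust anchor in addition to supplying Basic-auth credentials.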
Configuration Flexibility While most of our focus was on performance, stability, and security, we have also expanded your configuration choices.\nGlobal reservations can now be used in conjunction with subnet-level reservations. Earlier Kea versions had a single configuration parameter called reservation-mode that governed whether host reservations were global (out-of-pool) or subnet-level (in-pool), and it was not possible to use different reservation types at the same time. The reservation-mode parameter is now deprecated and replaced by three separate boolean parameters: reservations-global, reservations-in-subnet, and reservations-out-of-pool, each of which can be controlled independently.\nIt is now possible to configure preferred and valid lease lifetimes based on client classification.\nThe Configuration Backend has been extended to include client classes. A number of new commands have been added to the cb_cmds subscriber hook library.\nKea is now able to drop packets coming from devices that have matching host reservations with class set to DROP (the DROP class listed in the client-classes field in the reservations). This effectively allows the operator to selectively drop incoming packets from some devices, such as customers that have overdue payments, or misbehaving or unwanted devices.\nThe forensic logging hooks library is now able to log custom expressions. The expressions can include any option (such as relay option 82) or sub-option (such as circuit-id, remote-id, or any other sub-option), packet fields, network interface names, local or remote IP addresses, and more. It uses the same expressions engine as when defining client classification or flexible identifiers. Evaluating expressions is a relatively expensive operation, so customized logs will have a greater performance impact than the default log. The forensic logging hook library also supports flexible rotation intervals (e.g.
using seconds or days) and \u0026ldquo;pre-rotate\u0026rdquo; and \u0026ldquo;post-rotate\u0026rdquo; actions which can be used to call an external script, e.g. to move or compress respective files whenever the rotate action is performed.\nNew Hook Runs Arbitrary Scripts Due to popular demand, a new hook that calls an arbitrary external script has been added. This can be any script that initiates an external process, such as updating routing and firewall rules for provisioned devices. The script is called asynchronously, i.e. Kea starts the script, does not wait for its completion, and continues processing the packet. Asynchronous processing greatly decreases the performance impact this hook might otherwise cause.\nNew GSS-TSIG subscriber-only hook We have added an early, experimental version of a new GSS-TSIG hook. DNS updates can be protected with dynamic GSS-TSIG keys that were previously retrieved by Kea using TKEY exchange. This is typically required for updates to Active Directory.\nCassandra Backend Deprecated The Cassandra lease backend is now deprecated, which means that the feature will be removed in our next stable branch.\nPlease update if you are running any version prior to Kea 1.8.2 With this new stable branch, we plan to end our support for Kea 1.6, per our published release model. We will continue to maintain the Kea 1.8 branch, as well as the Kea 2.0 branch, until we produce Kea 2.2. Our next development version will be Kea 2.1.0, and our first maintenance release of Kea 2.0 will be Kea 2.0.1.\nReferences\nKea ARM Kea Release Model Kea Significant Features matrix ","permalink":"/blogs/2021-09-29-kea-2.0/","tags":["Kea","DHCP","Performance"],"title":"Kea 2.0 - Performance, Stability and Security"},{"categories":["ISC","Open Source"],"contents":" As more and more organizations and companies have recognized the importance of professional support for their Internet infrastructure, ISC\u0026rsquo;s support team has continued to grow. 
We\u0026rsquo;re delighted to introduce our newest support engineer, Greg Choules!\nSince he started here in early August 2021, Greg has been tackling BIND tickets while learning more about Kea DHCP. He built and operated relatively large BIND installations for many years at previous employers, so he came with lots of BIND experience to share with our customers - and he looks forward to expanding his knowledge of ISC\u0026rsquo;s products. He is happy to join the open source community because, as he says, \u0026ldquo;As a general philosophy I think knowledge and solutions to problems should be open. Why keep something secret when there are so many wonderful minds out there that know something you don\u0026rsquo;t?\u0026rdquo;\nIn school, Greg was good at maths, chemistry, and physics; he studied chemical engineering at university but didn\u0026rsquo;t want to end up working for oil companies, so he transitioned to networking after graduation. He has worked at the University of London and Mercury Communications/Cable \u0026amp; Wireless, designing and supporting X.25, frame relay, ATM, and MPLS networks, and gradually moving towards DNS at the start of the GRX infrastructure for mobile roaming interconnect. Most recently, he was at Ericsson Services/Three UK, where he focused on building large recursive DNS infrastructures to cope with the rise of mobile data. His service-provider background will surely be helpful in supporting ISC\u0026rsquo;s many service-provider customers.\nSo far, Greg enjoys ISC\u0026rsquo;s family atmosphere and our \u0026ldquo;slightly warped\u0026rdquo; sense of humour. (The guy riding the carousel fits right in here.) He hopes to bring our customers \u0026ldquo;some practical knowledge from the user\u0026rsquo;s point of view, with a pragmatic approach to problem-solving.\u0026rdquo;\nGreg and his family live in southern England, where he is \u0026ldquo;passionate about plastic. Or rather, about abolishing it. 
The world needs to move to decomposable, plant-based alternatives immediately.\u0026rdquo; His planet-friendly ethos extends to his vegetarian-verging-on-vegan lifestyle and his support for electric vehicle development.\nWe are delighted to have Greg on our team and know that our customers will benefit from his expertise.\nThank you for using ISC\u0026rsquo;s software; we hope you have enjoyed this peek behind the ISC curtain. We hope that sharing a little about us helps strengthen our connection to our customers. We welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-choules/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Greg Choules!"},{"categories":["BIND","Open Source"],"contents":"DNS over HTTPS Update It has been a long time since our last blog on the BIND 9 DNS-over-HTTPS (DoH) implementation. Here is an update on the considerable progress made since then. Although we will discuss user-visible changes here, most of the changes happened under the hood: our implementation is now more mature and standards-compliant.\nIn some prior versions of the BIND 9.17 development branch, it was impossible to build BIND without DNS-over-HTTPS support. As of BIND 9.17.17, this is no longer the case, and libnghttp2 is no longer a mandatory build-time dependency for BIND. Use --disable-doh as the ./configure script option to build without DoH.\nHere are the topics we cover in this blog:\n1. We have added connection limits to protect the server from excessive and abusive traffic.\n2. We have simplified other DoH syntax.\n3. We provide some tips on testing your DoH server.\n4. We also describe another alternative for TLS offloading, using HAProxy in either TCP or HTTP mode.\n5. We encourage you to try DoH, and provide an example BIND 9 configuration for setting up a DoH forwarder.\nNew Configuration Option - listener-clients The first new option is http-listener-clients. 
It works very similarly to tcp-clients, but with one notable exception: it sets the limit on a per-listener basis, not globally. The http-listener-clients option sets the default per-listener quota size, which can be overridden by a listener-clients option within an http clause. Such an approach adds flexibility to the configuration, making more complex configurations possible. For example, one could make BIND listen on public and private interfaces with different quota values.\ntls key-cert-pair { key-file \u0026#34;key.pem\u0026#34;; cert-file \u0026#34;cert.pem\u0026#34;; }; http local { ... listener-clients 300; ... }; options { ... http-listener-clients 300; ... listen-on tls key-cert-pair http local { 10.53.0.1; }; // DoH ... }; The default value for http-listener-clients is 300. Setting it to 0 disables the quota facility, which is useful for testing and benchmarking purposes. We settled on the value 300 for now because it is large enough to serve many clients while small enough to limit abuse of the server, taking into consideration that it might also need to serve clients over other DNS transports. Because hardware and software configurations might differ widely, we suggest you do your own measurements to find an appropriate value for your deployment. A significant number of these connections will remain idle for some time after the clients have completed name resolution, which brings us to the following point.\nDNS-over-HTTPS works differently from other transports. In particular, the clients (primarily web browsers) tend to keep connections open for longer periods compared to Do53 or DNS over TCP. They do so to reduce latency during name resolution: if there is an existing HTTP/2 connection to the DoH server, they will issue a request over that connection rather than opening a new one. 
That brings another point - if you have plans to serve DNS-over-HTTPS, you might want to set tcp-initial-timeout, tcp-keepalive-timeout, and tcp-idle-timeout to the minimum values that work for you. A value of 100 for all three (ten seconds; these options are expressed in units of 100 milliseconds) is a good setting to try.\nBefore (http-)listener-clients was introduced, DoH connections shared a quota with other TCP clients. Idle HTTP clients would have prevented other TCP clients from being served. That was the initial motivation for adding these new DoH-specific quotas.\nNew Configuration Option - streams-per-connection A second new option is http-streams-per-connection. This option sets the hard limit on the number of concurrent HTTP/2 streams per connection. After the hard limit is reached, the HTTP/2 session will be closed by the server. The http-streams-per-connection option sets the default number of streams per connection, which can be overridden by a streams-per-connection option within an http clause.\ntls key-cert-pair { key-file \u0026#34;key.pem\u0026#34;; cert-file \u0026#34;cert.pem\u0026#34;; }; http local { ... streams-per-connection 100; ... }; options { ... http-streams-per-connection 100; ... listen-on tls key-cert-pair http local { 10.53.0.1; }; // DoH ... }; The default limit for the number of streams per connection is 100. We chose this value because some libnghttp2-based software expects to be able to make at least that many simultaneous HTTP/2 requests. With this in mind, we suggest leaving the default value in place, although one could consider lowering this number, as no sensible client would make that many concurrent requests. Setting it to 0 disables the limit, but doing so is strongly discouraged for anything but benchmarking in a controlled testing environment.\nThe combination of (http-)streams-per-connection and (http-)listener-clients allows the administrator to limit the load HTTP clients can create on a server. 
listener-clients manages the number of active HTTP/2 connections, while (http-)streams-per-connection limits the number of simultaneous streams in an HTTP/2 connection. By doing so, one limits the maximum amount of resources the server would use per HTTP/2 connection.\nHTTP Clause Syntax Simplified Everything within an http clause is now optional. As a result, there is no need to specify endpoints (absolute HTTP paths) within an http clause. When the endpoints option is omitted, the standard default value (/dns-query) is used. By the way, the syntax of absolute HTTP paths is now strictly verified in the configuration and should comply with the relevant portion of the grammar given in RFC 3986 (see the path-absolute production definition in Appendix A for the details).\nHow to Test Your DoH Server Although we said that you should test your configuration in order to find the right values for the options discussed above, we have to admit that at this point BIND itself cannot provide you with enough feedback to help establish the best values. We plan to fix that eventually. For now, we recommend using HTTP/2 benchmarking software for this purpose and relying on the feedback it gives.\nOne such tool is h2load by the libnghttp2 authors. The tool can be used to test the HTTP transport using both GET and POST methods, as a compliant server is required to implement both.\nTo test a server in GET mode, one needs a DNS request in wire format, encoded in base64url. Get one directly from RFC 8484, Section 4.1.1 (as we did in this example) or extract one from a packet trace and manually encode it in base64url. 
(For example, see the Export Packet Bytes functionality in Wireshark to save binary data.)\nExample Test Query h2load -t 8 -c 300 -m 100 -n 1000000 https://doh.example.com/dns-query?dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB The example above will test the server https://doh.example.com with load simulating 300 concurrent clients (connections), opening up to 100 concurrent streams per connection, using 8 threads, and making 1,000,000 requests in total. The text after ?dns= is the DNS message encoded in base64url (an A record query for www.example.com). The tool will give feedback, including the number of queries per second, in addition to statistics regarding the received HTTP status codes (2xx status codes in HTTP indicate success). Ideally, if the request data is in the right format, all of the queries should be answered with 2xx status codes.\nTo test a server using POST requests, we need a file containing a DNS request in wire format. The tool will make requests with the file data in their body. We also need to make sure that the header indicating the data type is included; otherwise, BIND will treat such requests as malformed.\nExample Test Query using HTTP POST h2load -t 8 -c 300 -m 100 -n 1000000 -d ~/path/to/request_data_file.bin -H \u0026#34;content-type:application/dns-message\u0026#34; https://doh.example.com/dns-query The example is equivalent to the one above, except that it uses HTTP POST instead of GET.\nThe added options and tools discussed so far are not directly related to DNS. In fact, a DNS server serving DoH queries is like a regular HTTP/2 server which happens to serve only DNS queries, rather than web pages. Any experience working with and configuring HTTP servers is helpful in managing DoH.\nEncryption Offloading Using HAProxy In our first blog post we described how NGINX could be used for terminating TLS. There is another popular solution for encryption offloading (also known as TLS termination) and HTTP load balancing - HAProxy. 
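The GET example earlier in this section relies on a DNS message in unpadded base64url form (the encoding RFC 8484 specifies for the ?dns= parameter). As a minimal illustration of how such a value can be produced by hand, here is a Python sketch; the helper name build_dns_query is our own invention, not part of any DNS library:

```python
import base64
import struct

def build_dns_query(name, qtype=1):
    """Build a minimal DNS query in wire format (qtype 1 = A record)."""
    # Header: ID=0 (RFC 8484 recommends an ID of 0 for cache friendliness),
    # flags=0x0100 (recursion desired), QDCOUNT=1, all other counts 0.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE and QCLASS (1 = IN).
    return header + qname + struct.pack("!HH", qtype, 1)

wire = build_dns_query("www.example.com")
# RFC 8484 uses *unpadded* base64url for the ?dns= parameter.
dns_param = base64.urlsafe_b64encode(wire).rstrip(b"=").decode("ascii")
print(dns_param)  # -> AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB
```

The printed value matches the RFC 8484 example used in the h2load command above, so the same approach can be used to generate test queries for other names.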
We recommend reading the related parts of the previous update, where we discuss the reasons for TLS termination in the context of DoH.\nHAProxy can be used in conjunction with BIND in two modes: TCP mode and HTTP mode. However, they provide different levels of flexibility. In TCP mode, HAProxy takes unencrypted traffic from BIND, which acts as a backend server. Then it applies encryption on top of data received via TCP, without examining what is inside the TCP segments it encrypts. You can do TLS offloading this way by adding the following lines to the HAProxy configuration file:\nfrontend doh-in-tls mode tcp timeout client 10s # Here we specify \u0026#34;h2\u0026#34; as the ALPN token to be selected. # It is crucial for DoH to work. bind *:443 v4v6 tfo ssl crt /path/to/cert.pem alpn h2 default_backend doh-server-plain-tcp backend doh-server-plain-tcp mode tcp timeout connect 10s timeout server 10s # Address where BIND listens for unencrypted HTTP/2 requests server doh-server 10.53.53.53:80 In the example above, in the frontend section, we instruct HAProxy to listen on port 443 (the default HTTPS port) for TCP connections that are proxied to a BIND server acting as the backend. The BIND server’s address is 10.53.53.53, where it listens on port 80 (the default plain HTTP port) for unencrypted DNS requests over HTTP/2 (as described in the backend section).\nHere one can see a significant advantage of HAProxy over NGINX when doing TLS termination - it allows specifying Application-Layer Protocol Negotiation (ALPN) tokens, which is vital for DoH.\nProxying of TCP connections is the most performant way to do TLS encryption offloading. It is recommended if you want to dedicate a whole domain name (like https://doh.example.com) specifically to the DNS server intended to serve DNS-over-HTTPS.\nLet’s discuss proxying in HTTP mode now. First, we will create a configuration very similar to the one above. 
For regular DoH clients, it will work in the same way, albeit maybe with a slight performance loss.\nfrontend doh-in-https mode http timeout client 10s # Here we specify \u0026#34;h2\u0026#34; as the ALPN token to be selected. # It is crucial for DoH to work. bind *:443 v4v6 tfo ssl crt /path/to/cert.pem alpn h2,http/1.1 default_backend doh-server-plain-http2 backend doh-server-plain-http2 mode http timeout connect 10s timeout server 10s # Address where BIND listens for unencrypted HTTP/2 requests server doh-server 10.53.53.53:80 proto h2 However, there is one important difference: now we have gained the ability to serve DNS-over-HTTPS queries over HTTP 1.1. By adding http/1.1 to the list of acceptable ALPN tokens, we are leveraging HAProxy\u0026rsquo;s ability to convert between different HTTP protocol versions. That is not possible in TCP mode.\nIf we want to, we could make HAProxy serve plain HTTP/1.1 by adding another frontend statement, similar to the following one:\nfrontend doh-plain-http1-1 mode http timeout client 10s bind *:80 v4v6 tfo # listen on TCP port 80 (plain HTTP) default_backend doh-server-plain-http2 One could ask: what is the benefit of providing HTTP 1.1 if the specification strictly requires HTTP/2 or higher? HTTP 1.1 might be needed if one has HTTP 1.1 reverse proxying infrastructure in place, where the frontend can serve multiple HTTP versions (including HTTP/2), while the backend is expected to serve HTTP 1.1. There are, for example, CDN providers that still work this way.\nWe also expect that, at some point, HAProxy will get HTTP/3 support, making it possible to serve DNS over HTTP/3 (the so-called DoH3) in a similar way. We may consider extending our DoH support to cover this case natively, too. 
It is too early to discuss that at this point, though, as there is no final HTTP/3 specification available yet.\nWe realise that running a DoH server behind existing caching HTTP reverse proxy infrastructure might be important to some of our users. To fully cover this case, though, we need to properly set the Cache-Control HTTP header. We have a Gitlab issue to keep track of this work. If this is important to you, please add your comments and use cases there.\nImproving Privacy while Offloading TLS Now it is time to discuss another situation where proxying in HTTP mode will be useful. Let’s imagine that you have a site, e.g. https://site.example.com, and you want to serve DNS queries under the standard path (https://site.example.com/dns-query) while using the domain name for the rest of the site. HAProxy allows one to do that in HTTP proxying mode with the feature called HTTP Routing. Here is a sample configuration implementing this approach:\nfrontend in-route-http-doh mode http timeout client 10s # Here we specify \u0026#34;h2\u0026#34; as the ALPN token to be selected. # It is crucial for DoH to work. bind *:443 v4v6 tfo ssl crt /path/to/cert.pem alpn h2,http/1.1 # Serve the data from the WEB-server by default default_backend local-web-server # If path in a request starts with /dns-query, # use the DoH server as the backend acl dns-query path_beg -i /dns-query use_backend doh-server-plain-http2 if dns-query backend local-web-server mode http timeout connect 15s timeout server 15s # WEB server’s address and port (plain HTTP 1.1) server web-server 127.0.0.1:80 # For HTTPS 1.1 use #server \u0026lt;IP address\u0026gt;:\u0026lt;port\u0026gt; ssl verify none # For HTTP/2 (encrypted) use #server \u0026lt;IP address\u0026gt;:\u0026lt;port\u0026gt; ssl verify none proto h2 In the configuration given above, there are two backends. One is the web server serving the site; it runs on the local address 127.0.0.1, port 80. It is the default one. 
Another backend is the DoH server, serving the DNS queries. It is used when a request whose HTTP path starts with /dns-query is made, as instructed in the frontend section.\nA configuration like this has one crucial, non-technical benefit: it enables plausible deniability for end-users making DNS queries against the DoH server. In this case, a third party analysing the traffic cannot distinguish the DNS queries over HTTPS from regular requests for the site content: at most, only the site name is transmitted in clear text in the SNI part of the TLS handshake, but not the request paths. As a result, the end-user cannot be accused of making DNS queries against an unauthorised DNS server in restricted environments.\nAs concealing DNS requests in regular HTTP traffic was one of the main goals of developing DoH, we recommend a similar configuration to DNS operators interested in deploying DoH, should they want to do TLS termination with HAProxy. We believe that a slight performance drop when using such a configuration is well worth the end-users’ privacy benefits.\nIn this example, the local-web-server backend configuration contains examples of using encryption between the frontend and backend. That is exactly the case where BIND’s ephemeral certificates feature may be used.\nThis would be a good time to test in your environment We have only scratched the surface regarding the usage of HAProxy for doing TLS encryption offloading for DNS-over-HTTPS. As you can see, it is a capable tool, so if you plan to use it, it is worth looking at its documentation to end up with a configuration ideally suited for your environment. We used it a lot with our DoH implementation, with good enough results to recommend it as a TLS termination solution for BIND, especially when BIND lacks some TLS-specific configuration options. That is one of the areas in which we want to improve. 
(See Gitlab #2795 and Gitlab #2796.)\nOur DNS-over-HTTPS implementation in BIND has come a long way since it was introduced in our development branch at the beginning of this year. Although we don’t recommend running production services on the development branch, our DoH implementation is certainly mature enough for testing. Consider testing it by running an instance of the latest development release configured as a DoH forwarder.\nHere is an example configuration for a DoH Forwarder # Private Key and certificate pair for TLS tls local-tls { key-file \u0026#34;/path/to/private_key.pem\u0026#34;; cert-file \u0026#34;/path/to/fullchain_cert.pem\u0026#34;; }; # Disable remote control capabilities controls { }; options { allow-recursion {any;}; max-cache-size 5%; # Forward all queries to the local resolver. # You may consider replacing it with a resolver’s IP # address. forwarders { 127.0.0.1; }; # Disable DNS over port 53 (Do53) listen-on-v6 { none; }; listen-on { none; }; # Listen for encrypted HTTP/2 queries on # all IPv4 and IPv6 addresses listen-on port 443 tls local-tls http default {any;}; listen-on-v6 port 443 tls local-tls http default {any;}; }; We want to share one tip on how to quickly test whether your DNS-over-HTTPS deployment will work well with web browsers: as soon as you have finished configuring the DNS-over-HTTPS server instance, visit the URL for DNS queries (e.g. https://doh.example.com/dns-query). If you see a blank page, then most probably you got it right. At the very least, this quick test ensures that the encryption configuration has been done correctly.\nIf you are considering deploying DoH using BIND, it is the right time to give it a try. We would love to hear from you should you find any problems or anything lacking, and we hope to hear from you when our implementation works well for you, too. In the meantime, we will continue improving the implementation, as well as working on other BIND features. 
Stay tuned!\n","permalink":"/blogs/doh-update-sept-2021/","tags":["BIND","Privacy","DoH"],"title":"Update on DoH support in BIND 9"},{"categories":["BIND","Open Source","Events","Security"],"contents":"Raise Your Standards for Application Security If you work in IT, you know you need to constantly refresh your cyber defense skills. If you have not already, it is time to figure out how to apply the principles of Zero Trust Architecture to your BIND 9 systems.\nBIND 9 was developed during a simpler time. During the past several decades, network administrators have created and defended a perimeter around applications, shielding them from most kinds of attacks. With the explosion of BYOD, laptops walking in and out of the building, and remote working, it has become impossible to vet every device on the inside of the perimeter, so we can no longer rely on the network to protect critical applications. There are, however, some operating system features and firewall capabilities that you can deploy on your application servers to protect your BIND 9 instances.\nThis series of webinars, presented by Carsten Strotmann, will share a few tips and tools for securing the systems your BIND 9 instances run on. All sessions are appropriate for both resolvers and authoritative servers, and many of the techniques are applicable to other applications as well.\nRegister Here for the Series Register once for the entire series: all sessions will be recorded and the recordings posted within a few days on our website and our YouTube channel, so you can easily review any sessions you miss. Each session will last approximately one hour, including Q\u0026amp;A as time permits. Most sessions will include an optional hands-on session after the presentation, in which you can use a pre-staged VM to practice some of the skills you have just learned.\nSessions are scheduled at 15:00 UTC, but the day of the week varies. 
Please check your invite from Zoom for the correct time in your timezone, particularly as your local region may implement Daylight Saving Time.\nSession Topics: September 21 - Securing BIND 9 with SELinux (RHEL) - watch the recording October 20 - Securing BIND 9 with AppArmor/Firejail/Seccomp-BPF - watch the recording November 16 - Instrumenting BIND 9 on Linux with BCC/eBPF - watch the recording December 15 - DNS Fragmentation: Real-World Measurements, Impact, and Mitigations Session 1 SELinux is an important security technology. While it can be used in every Linux system, the SELinux security policy is in practice only usable in Red Hat-based systems (Red Hat EL, CentOS, Fedora, AlmaLinux \u0026hellip;).\nThis webinar will explain the SELinux technology:\nHow SELinux secures a BIND 9 DNS server How the Red Hat-supplied SELinux policies for BIND 9 work How users can adapt the SELinux policy for BIND 9 How to troubleshoot BIND 9 installations with SELinux At the end of this webinar there will be an optional Hands-On Session.\nSession 2 This webinar will look at alternatives to SELinux.\nAppArmor on Ubuntu/Debian Securing a BIND 9 Server with \u0026ldquo;Firejail\u0026rdquo; Introduction to the Seccomp/BPF \u0026ldquo;syscall Firewall\u0026rdquo; Restricting Syscalls with Seccomp and systemd Hardening a BIND 9 installation with systemd At the end of this webinar there will be an optional Hands-On Session.\nSession 3 Instrumenting BIND 9 on Linux with BCC/eBPF.\neBPF, the \u0026ldquo;extended Berkeley Packet Filter\u0026rdquo;, is a powerful technology for instrumenting the Linux kernel and applications. It will probably replace the traditional Linux firewall (netfilter/iptables/nftables) in the coming years, so if you are not using it yet, you should learn now.\neBPF tools and scripts can be used to gain insight into running processes, such as named. eBPF also provides valuable information for performance tuning on Linux. 
eBPF sees not only the network packets (like a traditional firewall), but also the syscalls and other kernel information, so it is possible to create richer firewall rules based on application data.\nThis webinar gives an introduction to the eBPF/BCC toolset and shows how to use the tools to inspect a running BIND 9 process. We will also discuss some security applications for eBPF, such as finding the source of suspicious queries or traffic on the network.\nAt the end of this webinar there will be an optional Hands-On Session.\nSession 4 DNS Fragmentation: Real-World Measurements, Impact, and Mitigations\nIn this webinar, Carsten Strotmann will present the results of a study he is conducting (together with Roland van Rijswijk and Patrick Koetter) on DNS fragmentation. Packet fragmentation is emerging as an area of concern because of the potential for attacks leveraging this functionality.\nHow much DNS fragmentation is there in the Internet? DNS Fragmentation Attacks The issue with older Linux kernel versions in LTS Linux distributions Mitigations against DNS fragmentation Speaker Information: Carsten Strotmann has 20 years of professional experience working in open source networking. He has delivered many webinars and training classes on open source solutions and DNS/DHCP/IPv6/Linux/Unix security. He is also the author of various articles on IT security topics in computer magazines.\n","permalink":"/blogs/bind-security-webinar-series-2021/","tags":["BIND","Webinars","Events","Training"],"title":"Securing BIND 9 - Webinar Series"},{"categories":["DHCP","Kea"],"contents":"Kea can use several different databases as storage \u0026ldquo;backends\u0026rdquo; for leases, host reservations, or configuration data.\nThe first database we integrated with was MySQL, because it offered robust clustering and was widely adopted. 
We added support for PostgreSQL in response to user requests, and our experience has been that PostgreSQL performs better and is currently more popular than MySQL. Kea added support for the CQL lease backend in Kea 1.1.0-beta1 and the CQL host backend in 1.4.0-beta1. These features never gained much traction with users, particularly compared to the level of interest in and deployments of the alternatives, MySQL and PostgreSQL. To date, we know of only one user of the Cassandra backend, so we are announcing plans to deprecate this feature.\nWhat Are the Issues With Cassandra? Unlike our other two backends, Cassandra is a non-relational NoSQL database. The non-relational nature of Cassandra makes it exceedingly difficult to implement more complex DHCP features such as the configuration backend, which requires over 20 tables of tightly coupled data that change over time and need to be kept in sync. With the Cassandra philosophy of data duplication, this requires creating and maintaining a massive number of tables. To be specific, there are 36 different types of get queries in the DHCPv4 code for the MySQL configuration backend. In the worst case, where each query requires its own table, this implies a duplication factor of over 70. This is clearly a very bad design. When we created the initial MySQL and PostgreSQL designs for the configuration backend, we also attempted to come up with a design for Cassandra, but it was a complete failure. We determined that Cassandra is simply not the right technology for this task.\nAnother problem with Cassandra is performance. In our performance tests, MySQL and PostgreSQL were roughly 5-10 times faster than Cassandra, even though we did not do any special tuning for MySQL or PostgreSQL performance.\nCassandra setup is complicated. As of June 2021, Cassandra is not available in many major distributions. It requires custom installation, with native packages now limited to Debian only. 
The Debian packages available require Python 2 (which reached end-of-life on 1 Jan 2020) and uninstall some Python 3 packages. This is a very risky step in a production environment, because it removes the current Python 3.8 or 3.9 and installs an old, unsupported version. Support for Python 3 is only available in the alpha release of the upcoming Cassandra 4.0, which had not been released as of July 2021. The user has a tough choice between running an antiquated version past its end of life or running unreleased alpha software. Neither option is reasonable in a production environment.\nCassandra is also very picky about the Java version. For example, on modern systems such as Ubuntu 21.04, Cassandra simply doesn’t start and produces no logs. When run manually, Cassandra produces a cryptic \u0026ldquo;Improperly specified VM option ‘ThreadPriorityPolicy=42’\u0026rdquo; error message. This is an obscure indication that the Java version is too new (11.x) and needs to be downgraded (to 8.x).\nTo use C++ bindings (Kea is written in C++), a data driver is required. For a long time, the data driver didn’t use the standard pkg-config approach and required custom hacking of the software detection. Compared to MySQL and PostgreSQL, which are widely available in all popular Linux and BSD distributions, setting up Cassandra is complex, and the complexity is not decreasing over time.\nCassandra is also an ongoing maintenance burden. As we introduce new features in Kea, such as the ability to get database statistics that are synced between multiple Kea instances sharing the same database, we need to extend our API, and we want to maintain parity between backends. Porting solutions between MySQL and PostgreSQL is frequently very easy but is almost always a problem with Cassandra. 
That is not a Cassandra flaw in itself; the core problem here is that it is different from the other solutions Kea supports.\nPlan to Deprecate Cassandra For these reasons, we are deprecating Cassandra support as of Kea 1.9.9. The feature will function as before in the Kea 2.0.x and 2.1.x series, but will print a warning. We plan to remove the feature entirely in a future release, possibly as soon as Kea 2.2.0.\nReferences\nKea ARM Kea Release Model Kea Significant Features matrix ","permalink":"/blogs/2021-07-28-kea-cassandra-deprecation/","tags":["Kea","DHCP"],"title":"Cassandra Database Backend to Be Deprecated From Kea"},{"categories":["BIND","Performance"],"contents":"This article focuses on benchmarking resolver performance, using a new methodology that aims to provide near-real-world performance results for resolvers.1\nAbout Resolver Testing Cache State and Timing Matter Resolvers don\u0026rsquo;t know any DNS answers by themselves. They have to contact authoritative servers to obtain individual bits of information and then use them to assemble the final answer. Resolvers are built around the concept of DNS caching. The cache stores DNS records previously retrieved from authoritative servers. Individual records are stored in the cache for up to the time limit specified by the authoritative server (Time To Live, or TTL). Caching greatly improves scalability.\nAny DNS query which can be fully answered from cache (a so-called \u0026ldquo;cache hit\u0026rdquo;) is answered blazingly fast from the DNS resolver\u0026rsquo;s memory. On the other hand, any DNS query which requires a round-trip to authoritative servers (a \u0026ldquo;cache miss\u0026rdquo;) is bound to be orders of magnitude slower. 
Moreover, cache miss queries consume more resources because the resolver has to keep the intermediate query state in its memory until all information arrives.\nThis very principle of the DNS resolver has significant implications for benchmarking: in theoretical terms, each DNS query potentially changes the state of the DNS resolver cache, depending on its timing. In other words, queries are not independent of each other. Any change to how (and when) we query the resolver can impact measurement results.\nIn more practical terms, this implies a list of variables that we have to replicate:\nA stream of test queries resulting in a realistic cache hit/miss rate. For this purpose, we have to replicate the exact queries and also their timing. Answers returned by authoritative servers, including TTL values. Network conditions between the resolver and authoritative servers (latency, packet loss, etc.). Cache size and other parameters affecting cache hit/miss ratio. The traditional approach implemented, e.g., in ISC\u0026rsquo;s Perflab or using the venerable resperf tool, cannot provide realistic results because it ignores most of these variables.\nThe second implication is that even the traditional QPS metric (queries answered per second) alone is too limited when evaluating resolver performance: it does not express the type of queries, answer sizes and TTLs, query timing, etc.\nOther performance-relevant variables include:\nThe protocol used between client and server (UDP, DNS-over-TLS, DNS-over-HTTP/2). DNS server setup. All of the \u0026ldquo;usual suspects\u0026rdquo; such as hardware, network driver, kernel versions, operating system configuration, firewall, etc. But these are not fundamentally different from benchmarking authoritative servers, so we will not delve into details.\nYou Can\u0026rsquo;t Simulate the Internet The long list of variables above makes it clear that preparing an isolated laboratory with a realistic test setup is very hard. 
In fact, ISC and other DNS vendors have learned that it\u0026rsquo;s impossible; realistic resolver benchmarking must be done on the live Internet.\nDevelopers from CZ.NIC Labs wrote a test tool called DNS Shotgun for this purpose. It replays DNS queries from traffic captures and simulates individual DNS clients, including their original query timing. The resolver under test then processes queries as usual, i.e., contacts authoritative servers on the Internet and sends answers back to the simulated clients. DNS Shotgun then receives and analyzes the answers.\nObviously, benchmarking on a live network cannot provide us with perfectly stable results. To counter that, we repeat each test several times and always take fresh measurements instead of using historical data. E.g., a comparative test of BIND versions 9.16.10 and 9.16.18 (which were released half a year apart) requires us to measure both versions again. This process ensures that half a year of changes on the Internet and our test system do not skew our comparison.\nFor each test run, we start with a new resolver instance with an empty cache. This way, we simulate the worst case of regular operation: it is as if the resolver was restarted and now has to rebuild its cache from ground zero.\nLet\u0026rsquo;s have a look at the variables we measure and how to interpret them.\nInterpreting Resolver Behavior The QPS metric alone is not particularly meaningful in the context of regular DNS resolver operation. Instead, we measure indications that resolver clients are getting useful answers.\na) Response rate - Does the resolver answer within a time limit?\nThis metric serves as a sanity check: a resolver has to answer the vast majority of queries within the client\u0026rsquo;s time limit because an answer one millisecond after the client times out is useless.\nThe fact that a resolver does not answer typically indicates significant overload. 
Still, it can also happen naturally right after resolver startup: the resolver has an empty cache, and all queries cause cache misses, require orders of magnitude more processing, and thus lead to much lower throughput. In a steady state, most queries cause a cache hit, leading to higher throughput.\nOutside this startup phase, we generally expect a resolver to answer all the queries, except for packets malformed beyond recognition. The proportion of malformed queries which naturally occur in traffic depends on client behavior and changes over time.\nb) Response code (RCODE) - How many failures do we observe?\nAnother sanity check is the proportion of RCODEs in received answers. Immediate answers are useless if all of them are SERVFAIL (or other error codes). Usually, the vast majority of traffic should be NOERROR and NXDOMAIN answers, but SERVFAIL, FORMERR, and REFUSED also occur naturally.\nAlso, the proportion of RCODEs depends on client behavior and changes over time.\nc) Response latency - How quickly does the resolver respond?\nFinally, we arrive at the most useful but also the most convoluted metric: response latency, which directly affects user experience. Unfortunately, DNS latency is wildly non-linear: most answers will arrive within a split-millisecond range for all cache hits. Latency increases to a range of tens to hundreds of milliseconds for normal cache misses and reaches its maximum, in the range of seconds, for cache misses which force communication with very slow or broken authoritative servers.\nThis inherent nonlinearity also implies that the simplest tools from descriptive statistics do not provide informative results.\nTo deal with this complexity, the fine people from PowerDNS developed a logarithmic percentile histogram which visualizes response latency. 
It allows us to see things such as:\n95 % of queries were answered within 1 ms (cache hits) 99 % of queries were answered within 100 ms (typical cache misses) 99.5 % of queries were answered within 1000 ms (problematic cache misses) and so on.\nEven more importantly, a logarithmic percentile histogram allows us to compare the latency of various resolver setups visually.\nFinally, we are finished with the theoretical introduction and can start discussing our results.\nTest Results Data Set and Load Factor For realistic results, we need a realistic query data set. This article presents results measured using traffic captures (of course anonymized!) provided by one European telecommunications operator.\nThese traffic captures contain one hour of traffic directed to 10 independent DNS resolvers, all of them with roughly the same influx of queries. In practice, we have 10 PCAP files: the first with queries originally directed for resolver #1, the second with queries directed to resolver #2, etc.\nThese traffic captures define the basic \u0026ldquo;load unit\u0026rdquo; we use throughout this article: traffic directed to one server = load factor 1x. To simulate higher load on the resolver, we simultaneously replay traffic originally directed to N resolvers to our single resolver instance under test, thus increasing load N times. E.g., if we are testing a resolver under load factor 3x, we simultaneously replay traffic originally directed to resolvers #1, #2, and #3.\nThis definition of load factor allows us to avoid theoretical metrics like QPS and simulate realistic scenarios. For example, it allows us to test this scenario: \u0026ldquo;What performance will we get if nine out of 10 resolvers have an outage and the last resolver has to handle all the traffic?\u0026rdquo;2\nTest Design Here is the basic testbed setup we used to compare the BIND 9.16 series of releases to equivalent BIND 9.11 versions. 
We intentionally are not providing the exact hardware specifications, to discourage undue generalization of the results.\nWe use two servers: one to simulate (many) DNS clients using DNS Shotgun, and the other to run the DNS resolver under test. Each machine has 16 logical CPUs (eight physical cores with hyperthreading enabled) and 42 GB of memory. DNS Shotgun is configured to replay the original query stream (including timing) from one or more (original) telco resolvers to one machine running a resolver under test, with a 2000 ms timeout on the client side.3 BIND is configured with max-cache-size set to 30 gigabytes. Practically all other values are left at default settings: the resolver is doing full recursion and DNSSEC validation. Also, the resolver has both IPv4 and IPv6 connectivity. The resolver and client machine always start from a completely clean state; most importantly, the resolver always starts with an empty cache. This approach allows us to measure the worst-case scenario, \u0026ldquo;how quickly will the resolver recover after a restart?\u0026rdquo; In practice, we inspect resolver behavior in the first 120 seconds, and expect service recovery within the first 60 seconds. Of course, 120 seconds is a short test compared to regular resolver uptime. We are focusing on the worst-case scenario, which is an empty cache. Depending on client behavior, the resolver can handle even more load after it has had more time to fill its cache. By starting in a clean state, we ensure that the performance levels described in this article can be safely reached without worrying about system restarts, complicated load balancing, etc. 
There is one point I cannot stress enough:\nIndividual test results like response rate, answer latency, maximum QPS, etc., are generally valid only for the specific combination of all test parameters, the input data set, and the specific point in time.\nIn other words, results obtained using this method are helpful ONLY for relative comparison between versions, configurations, etc., measured on the exact same setup with precisely the same data and time.\nFor example, a test indicates that a residential ISP setup with a resolver on a 16-core machine can handle 160 k QPS. It\u0026rsquo;s not correct to generalize this to another scenario and say, \u0026ldquo;a resolver on the same machine will handle a population of IoT devices with 160 k QPS on average\u0026rdquo; because it very much depends on the behavior of the clients. If all of our hypothetical IoT devices query every second for api.vendor.example.com AAAA, the resolver will surely handle the traffic because all queries cause a cache hit. On the other hand, if each device queries for a unique name every second, all queries will cause a cache miss and the throughput will be much lower. Even historical results for the very same setup are not necessarily comparable because \u0026ldquo;something\u0026rdquo; might have changed on the Internet.\nPlease allow me to repeat myself:\nThis test was designed to compare BIND 9.11 to BIND 9.16, handling a specific set of client queries at a specific point in time. Depending on the test parameters and your client population, your results could be completely different, which is why we recommend you test yourself if you can.\nBaseline Performance: BIND 9.11.34 To establish a baseline, we replay 120 seconds of traffic from one randomly selected resolver in our data set to BIND v9.11.34. 
Let\u0026rsquo;s inspect the resolver performance in detail:\na) Response rate - Does the resolver answer within a time limit?\nFirst, we plot the percentage of responses received within the 2-second time limit over time.\nAt first glance, the resolver is able to answer the vast majority of queries starting from the third second of the test, which is good. At the same time, we can see tiny drops distributed seemingly randomly across the time axis. Possible explanations include:\nThe test environment is not reliable. The resolver under test is not reliable. The data set contains bursts of queries that take more than 2000 ms to resolve. The data set contains malformed queries which the resolver does not respond to. \u0026hellip; or a combination of these factors. To get more data, we repeat the same test nine times - and we see drops at precisely the same places, with very similar amplitude. Let\u0026rsquo;s zoom in on one such drop:\nAfter nine test runs, we can see the drops are reliably reproducible, which practically rules out noise caused by the test environment or the resolver itself. Also, we are using a battle-tested version of BIND from the 9.11 series, which makes it unlikely that BIND itself would be terribly broken and cause these drops.\nA remaining hypothesis is that something in the data set is causing this. To verify it, we re-ran the test using data captured from other telco resolvers. We confirmed that the distribution of drops changes for each resolver and stays stable across multiple test runs.\nIn other words, we have confirmed that our data set (consisting of \u0026ldquo;normal\u0026rdquo; telco traffic) contains weird queries, which is something we have to live with: we are testing with real-world data!\nb) Response code (RCODE) - How many failures do we observe?\nWe have established that the resolver answers a reasonable proportion of queries. 
Now we have to check if the resolver answers \u0026ldquo;sensibly,\u0026rdquo; i.e., that response codes SERVFAIL, REFUSED, and FORMERR are only a small fraction of the answers. To do this, we can take the measurement results we already have and plot each RCODE as a separate line:\nWe can see NOERROR answers usually represent 90-95 % of all answers, and NXDOMAIN oscillates roughly around 4 %. SERVFAIL, REFUSED, and FORMERR are also present, and their proportion randomly goes up and down, most likely depending on what weird queries clients send and how many broken authoritative servers the resolver has to contact. Also, we can see that after 100 seconds, a client sends a high volume of queries that generate REFUSED answers.\nAgain, we verify this is the property of our data set by inspecting test results for other telco resolvers we have data from. Only two out of ten traffic captures contain spikes in REFUSED answers, which confirms our hypothesis that the error codes we observe result from suspicious client behavior.\nAgain, we have confirmed that DNS traffic is the wild west, and any resolver must deal with it.\nc) Response latency - How quickly does the resolver respond?\nMeasuring latency right after resolver startup would be misleading because the cache is empty, leading to an unrepresentative cache hit ratio. To counter this problem, we visualize latency data only from the second half of the test, which represents what users see during normal operation.\nThe following chart is an enhanced version of a logarithmic percentile histogram. Each test was repeated nine times, and the line shows average latency. The shaded area around the solid line denotes minimum and maximum values across all runs. 
The results are bi-modal, with answers served from cache shown on the lower right section of the chart, and the lines in the center and upper left sections showing the longer tail of latency for queries requiring recursion.\nThe Y-axis shows latency, while the X-axis is the percentile rank of the slowest queries. Translated to words:\nLess than 7 % of answers have latency higher than 1 ms. I.e., 93 % of queries are answered within 1 ms, which clearly indicates a cache hit. Less than 3 % of queries have latency higher than 10 ms. I.e., (7 % - 3 % =) 4 % of queries are answered within 1-10 ms. 1 % of queries have latency higher than 100 ms. I.e., 2 % of queries take between 10-100 ms to resolve. Less than 0.6 % of queries have latency higher than 1000 ms. I.e., 0.4 % of all queries require 100-1000 ms to resolve. 0.5 % of queries do not get an answer within the 2000 ms time limit. I.e., 0.1 % of all queries require 1000-2000 ms to resolve. The shading shows the minimum and maximum latency from nine test runs, which gives us an idea about result stability:\nFor sub-millisecond latency, we can mostly ignore the background color because the actual latency is mixed with noise caused by many factors. Answers with latency of 1-100 ms roughly represent cache misses for domains on well-behaving and well-interconnected authoritative servers. The minimums and maximums are very close to each other, which indicates the results are pretty stable, usually within 1 % percentile rank and a couple of milliseconds on both sides. For answers with latency higher than 100 ms, the range of latencies observed in nine test runs gets wider and wider, which is also expected. These answers come either from faraway authoritative servers or domains with some troubles and require query retries. It becomes important what server the resolver under test decided to contact, which is a process involving randomness. How Much Load Can The Resolver Handle? 
We have established a baseline, using BIND 9.11.34 and traffic from a single telco resolver, i.e., load factor 1x. The next question is: How can we usefully compare the maximum performance of a resolver running BIND 9.11.34 to one running 9.16.19?\nIdeally, as the load factor increases, the resolver will keep answering the same percentage of queries as it did under the baseline load. When the resolver starts losing queries, it is overloaded. This value is visible in the latency chart in the upper left corner, as the percentile rank on the X-axis where the line touches the timeout limit on the Y-axis. For BIND 9.11.34 under load \u0026ldquo;1x our telco resolver,\u0026rdquo; the normal percentage of unanswered queries is around 0.5 % (which consists of either severely malformed queries or queries pointing to domains that require more than 2000 ms to resolve).\nThe second and more sensitive criterion is overall latency. Suppose we overload the resolver only a bit. In that case, it will still manage to answer almost all the queries, but latency will increase. Latency is an area where operators can set arbitrary limits. This article uses the (admittedly vague) criterion \u0026ldquo;latency is acceptable if it does not significantly exceed the latency observed under the baseline load.\u0026rdquo; In other words, it\u0026rsquo;s bad if the latency plot for higher loads lies in the \u0026ldquo;up and right\u0026rdquo; direction from the original baseline on the latency histogram.\nHere we can see that concentrating traffic from seven originally independent telco resolvers on a single machine running BIND 9.11.34 actually improves latency! The main reason is an improved cache hit rate, which happens naturally when more traffic concentrates on a single resolver. The cache also helps with getting answers from half-broken domains: even if the first query for a broken domain times out on the client side, BIND will continue resolving it and eventually cache the answer3. 
With more clients sending traffic to the same resolver, chances are higher that another client will send a query for the same broken domain. Then the client will get an answer from the cache, leading to a lower overall ratio of client timeouts.\nLet\u0026rsquo;s try to increase load even more by sending traffic from eight resolvers to one:\nThis time, increasing the load to 8x the baseline did not significantly improve the ratio of answers with latency smaller than 100 ms. It somewhat increased latency for very slow answers. Even more importantly, the shaded background in the top-left quadrant indicates the resolver is working hard. We are on the verge of increasing the ratio of queries that time out.\nWe can push a bit harder and try a load factor of 9x:\nThis chart shows that a load factor of 9x is too much for BIND 9.11.34 to handle. The proportion of queries that timed out is a bit higher. The shaded backgrounds for load factors 8x and 9x do not overlap, which indicates this relatively small difference is not a result of random noise. Also, the proportion of \u0026ldquo;problematic\u0026rdquo; answers with latency higher than 100 ms is a bit higher, which indicates the resolver is working really hard but not keeping up.\nBased on this data, we can conclude that load factors of 7x to 8x are about the maximum load the resolver can handle without leading to a degraded user experience. In other words, we can safely direct traffic from seven to eight \u0026ldquo;original\u0026rdquo; resolvers to a single instance, with load factor 7x being more on the safe side.\nWe have now found the performance limits of BIND 9.11.34, and finally, we can compare it with its successor: the BIND 9.16 series.\nBIND 9.16.19 Performance We use the same resolver configuration and traffic to test both versions. 
Let\u0026rsquo;s jump straight to tests with load factor 7x, which is about the maximum BIND 9.11.34 can safely handle, and compare it with BIND 9.16.19:\nFrom this chart, we can see that version 9.16.19:\nAnswers slightly more queries (a reduction of about 0.1 % in query timeouts). Provides more predictable latency for answers obtained from half-broken domains (indicated by a narrower color background for answers with latency higher than 100 ms). Overall, 95 % of queries have lower or the same latency as version 9.11.34. For the 5 % of queries with latency between 1 and 6 ms, the newer version incurs a latency penalty between 0 and 1.5 ms, compared to the old version. The higher latency for 5 % of queries was pretty disappointing for our engineering team. Users will not notice a difference between answers arriving in 5 or 6 ms, but our engineers could not get it out of their minds. This was a matter of principle! Eventually an investigation led to the removal of four lines of code, which fixed the issue. This fix is scheduled for release in August 2021.\nWe have established that the resolver running BIND 9.16.19 is at least as performant as BIND 9.11. Let\u0026rsquo;s see what happens if we push harder and double the load on BIND 9.16.19:\nHere we can see the resolver still works fine and answers more queries than version 9.11.34 would answer under even half the load. Doubling the load increased the latency of 15 % of queries by (at most) 2.5 ms, which is very good.\nLet\u0026rsquo;s see what happens under load 15x:\nWe can clearly see that load factor 15x is too much for BIND 9.16.19. Even though the resolver still answers queries as it should, the wide shaded background area indicates that the latency of answers is wildly unstable. Also, on average, the latency is worse than it was in all previous experiments.\nConclusions We have extensively tested BIND 9.16.19 resolver performance using traffic captures from a telecommunications operator. 
We conclude that this new version outperforms the resolver in BIND 9.11.34. A minor glitch, which incurs about 1-2 ms latency for a small percentage of answers, is already fixed and will be released in August 2021.\nWe embarked on this benchmarking project because we had multiple anecdotal reports from users of performance regressions in the BIND 9.16 resolver. Using the test method described above, we were able to confirm this regression in versions of BIND 9.16 prior to 9.16.19 and identify multiple issues introduced by the refactoring in that branch. By repeating the test over several months as we modified the BIND code, we were able to eliminate the problems and confirm that 9.16.19 now performs as well as or better than the 9.11 series.\nLast year, we published measurement results comparing the performance of BIND versions. The older article was primarily focused on the authoritative DNS server use-case. It also included one test for DNS resolver performance, but we have learned that the test was not realistic enough to predict real-world performance.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nTo simulate higher load factors, we slice and replay the traffic using the method described in this video presentation about DNS Shotgun around time 7:20. Most importantly, this method retains the original query timing and realistically simulates N-times more load. This method works under the assumption that the additional traffic we simulate behaves the same way as the traffic we already have. I.e., if you have 100,000 clients already, the assumption is that the next 100,000 will behave similarly. This assumption allows us to re-use slices of the original traffic capture from 10 resolvers to simulate the load on 20 resolvers.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\nThe DNS Shotgun timeout of 2 s was selected to reflect a typical timeout on the client side. 
BIND uses an internal timeout of 10 s to resolve queries; the resolver continues resolving the query even after the client has given up. This extra time allows the resolver to find answers even with very broken authoritative setups and cache them. These answers are then available when the clients ask again.\u0026#160;\u0026#x21a9;\u0026#xfe0e;\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n","permalink":"/blogs/bind-resolver-performance-july-2021/","tags":["BIND","performance"],"title":"Comparative Resolver Performance Results of BIND Versions - July 2021"},{"categories":["DHCP","ISC"],"contents":"ISC plans to end maintenance of the ISC DHCP client and relay by the end of Q1, 2022. We will continue maintaining the DHCP server, and any code that is common between the server, client, and relay, for at least a couple more years. Our January 2020 and 2022 releases of ISC DHCP incorporated a number of submitted patches, mostly for the client, but otherwise we have done little recent maintenance on the client or relay. To be honest, those components have simply not been a focus of ours for several years. As a small non-profit organization, we have to allocate our limited resources carefully, and we think the ISC DHCP server and Kea distribution have more future value for users.\nWhy are we doing this? The ISC DHCP code is extremely mature and was not designed for unit testability. The test coverage is just not good enough to inspire confidence that any change, even a seemingly minor one, would not cause some unidentified breakage.\nISC has no support customers for the ISC DHCP client or relay, and we haven’t for at least a decade, so there is no funding stream to support continuing effort on them. We are not even sure whether anyone is still updating their client or relay code from our distribution. 
We think it is likely that some operating systems are maintaining DHC client or relay implementations or forks that are \u0026ldquo;better maintained\u0026rdquo; and more modern than our current client or relay code. For numerous reasons - most notably, for software security - it is important to use software that is actively maintained.\nTimeline We know that ISC DHCP has a large user base, but nearly all those users get the software via a package or distribution and we have no way to communicate with them directly.\nApril 20, 2021 - Consulted a major OS packager asking what the impact would be if ISC ended client maintenance. May 22, 2021 - Emailed a few dozen people who had submitted patches or particularly detailed bug reports in the past two years, saying we intended to end maintenance of the client and relay and asking if any of them were interested in becoming community maintainers. We received only a single reply, from someone willing to join a maintenance committee, but this was not enough to make up a team. June 17, 2021 - Sent a public email and request for comment to the DHCP-users mailing list. June 18, 2021 - Made an announcement to the DHCP-announce mailing list. July 2, 2021 - Posted this blog on the ISC website. Q1 2022 - We intend to produce a 4.4.3 release, which will declare in its release notes that it is expected to be the last release containing the client and relay components of ISC DHCP. We hope this message will reach any additional active users who are not subscribed to the DHCP-announce or DHCP-users lists. H1 2022 - After the last update to the client and relay in 4.4.3, we plan to issue ISC DHCP 4.4.4, removing the client- and relay-specific code from the distribution. What about the users of the DHCP client and relay? ISC DHCP has been, and remains, open source. Anyone can fork it and develop or maintain it. Users still have all the open source freedoms with ISC DHCP that they have always had. 
We are just announcing that ISC will no longer maintain this code. The client and relay code can be found in the ISC DHCP open repository. Older releases of ISC DHCP (going all the way back to 1998) are archived on ISC\u0026rsquo;s FTP server.\nIf you are using an operating system that relies on the ISC DHCP client, there is no reason to worry that it will stop working. Your OS maintainer\u0026rsquo;s team is already providing the client script for DHCP, and they are probably testing it with each release. If you do encounter an issue with the DHCP client, you should report it to your OS provider. Several of them already accept and address issues with the DHCP client.\nThere are forks of the ISC client and relay included in operating system packages, and there are also alternative client and relay implementations available. For example:\nRoy Marples offers a dhcpcd client. OpenBSD distributes a DHCP client, apparently based on the ISC DHCP client, which has been updated by Henning Brauer. FreeBSD also packages that client. BusyBox distributes a very small client, suitable for embedded applications. OpenDHCP reportedly includes a relay. OpenWRT packages that, plus a DHCP forwarder maintained by Scott Logan. Dibbler, which is also no longer maintained, includes a client and relay as well as a server. ISC is also considering developing a new relay based on Kea DHCP code. Feedback welcome We welcome your comments, questions, and suggestions. If you are aware of a well-maintained implementation of either a DHC client or relay, and would like to suggest users switch to that alternative, please feel free to email us at marketing@isc.org with that information and we will update this blog post.\nWe have maintained ISC DHCP for over 20 years, and it is time to archive the ISC DHC client and relay. 
We would like to thank all the contributors who have sent us patches (mostly upstreaming client patches from operating system distributions) over the years.\n* this blog was updated in January 2022, to change the date for the 4.4.3 release, as it was not issued in 2021 ","permalink":"/blogs/dhcp-client-relay-eom/","tags":["DHCP"],"title":"ISC DHCP Client and Relay End of Maintenance"},{"categories":["BIND","DNS"],"contents":"It is summertime and even though you might rather be at the beach, this could be a good time to check in on what has been going on with BIND.\nThe biggest projects for the past year have been continuing the implementation of the new, refactored network interface using libuv, and the addition of support for the new DoT and DoH encrypted transports, including updating dig to send DoH queries. Back in January of this year, we announced a change in our release model, moving from 12-month branches to 24-month branches to give us more time on a new feature branch to complete large refactoring projects like the network manager, because we anticipate more similar projects in the future. We have made several adjustments to our Serve-Stale feature based on user feedback, and we have continued making improvements to the Key and Signing Policy (KASP) tool first published in 9.16. We have also updated our DNS Cookie support, tightening policy as deployment of DNS Cookies improves across the Internet. DNS Cookies are a useful and lightweight defense against the whole category of spoofing attacks on the DNS, such as the recently published SADDNS cache poisoning vulnerability.\nMost of our efforts are spent on investigating bug reports, fixing bugs, modernizing, and addressing technical debt. We are using a number of automated tools to help us find errors in BIND. Last fall we joined the Google OSS-Fuzz project, which promptly found three significant issues. We also use scanning services from lgtm, Coverity Scan, and Sonar Cloud. 
We test in house with LibFuzzer and AFL in addition to the cloud tools. We run thread and address sanitizers from LLVM/CLANG and GCC.\nIn the past year we have added dozens of new or updated system tests, closing over 60 test-related issues, and in the past 6 months we have developed a new resolver performance test bed, based on the DNS Shotgun tool from CZNIC, which is \u0026rsquo;loaded\u0026rsquo; with actual captured, anonymized queries shared with us by users. We have also implemented pairwise testing of all build options (there are many of these in BIND).\nOver the past couple of years we have made significant investments in building BIND packages for our users. With our monthly update cycle, it sometimes feels as if we are continuously re-building packages for Ubuntu, Fedora, Debian, CentOS and our Docker image. In response to questions about what platforms different BIND branches are supported on, we have created a matrix in our knowledgebase.\nWe are following the NIST efforts to develop new standards for secure software development. In addition, we are updating our answers to the questions for the Linux Foundation\u0026rsquo;s Core Infrastructure Initiative Best Practices Badge. We are trying to achieve compliance with one of the higher levels (we have Bronze, but they have recently added Silver and Gold levels.)\nThis spring we presented a series of classes developed and delivered by Carsten Strotmann. He covered logging, metrics, using dnsdist with BIND and using dynamic zones. Unfortunately it is too late to participate in the hands-on exercises, but the sessions were recorded. These might be a useful resource for new BIND administrators.\nAs always we welcome your comments on BIND development and our roadmap, and your active participation on the bind-users mailing list. 
We are currently looking to add another BIND developer to our team, so if you know any experienced developers who might be interested in working on a large and complex open source project, please let them know!\nNotes on Specific Branches BIND 9.11 is approaching End of Life We will continue maintaining BIND 9.11 through the end of 2021, and will patch any significant security issues through Q1, 2022. Although we don\u0026rsquo;t usually change features in an open source Extended-Support (ESV) version, in the 9.11-S edition, we are backporting updates to the serve-stale feature as we are finding conflicts with other features, such as pre-fetch.\nWe are updating the platforms we test 9.11 on to the current versions of the supported operating systems. We have replaced Ubuntu 16.04 LTS with Ubuntu 18.04 LTS, added FreeBSD 13.0, OpenBSD 6.9 and Fedora 34. See the updated Supported Platforms https://kb.isc.org/docs/supported-platforms article in the knowledgebase.\nWe are continuing to add system tests for all supported branches, including 9.11. We are also checking performance of each release vs the prior one to ensure that we are alerted to any performance regressions (and there are none in these end of branch 9.11 releases).\nUsers still running 9.11, and we know there are many of them, should start planning their migration to 9.16 now. We recommend doing some non-production testing with your configuration if at all possible, and then deploying gradually, while monitoring for performance changes. 9.11 users upgrading will find the BIND ARM for 9.16 and later online at ReadTheDocs. DNSSEC users should consider adopting the new Key and Signing Policy tool that facilitates key management in 9.16.\nBIND 9.16 approaches ESV With the June release of 9.16 we will have completed the transition to the new libuv-based network manager infrastructure. 
Our internal testing shows that recent changes to the network manager have fixed some resolver performance problems that have been plaguing 9.16 since we introduced the network manager. With these changes BIND 9.16 consistently performs as well or better than 9.11. Now we can finally end the refactoring work in this branch and declare 9.16 an Extended-Support version (ESV) in July, putting it into a more stable maintenance-only mode. At that time, we will advise more conservative users to start migrating from 9.11.\nWe will be posting a separate blog sharing some of our performance testing experience and results for 9.16.\nEncrypted DNS moves to 9.18 In order to minimize ongoing churn in the 9.16 branch for ESV status, we have decided to retract our earlier pledge to backport DNS-over-HTTPS (DoH) and DNS-over-TLS (DoT) to 9.16. Backporting that to 9.16 at this point (we have almost no operator feedback on it yet) could mean months of further changes in 9.16. The next stable version, BIND 9.18, is now only six months away (expected release January 2022) and that is a better target for production deployment of DoH and DoT. If you are using either DoH or DoT in BIND, we would love to hear from you (use the bind-users list to share your experience).\nPlans for BIND 9.18 We are currently projecting our new stable branch in January, early in Q1. We have a long list of pending customer-requested minor enhancements, many associated with troubleshooting or monitoring. We will try to do as many of these as possible before we create the new 9.18 branch. We have some work on-going on zone transfers and are planning additional RPZ logging. We have more work to do to make our DoH implementation more scalable. (This is a background activity based on the very low level of interest we have seen from the BIND user community.) 
We are also working on documentation improvements, including migrating some content from the ISC knowledgebase to the BIND ARM.\nRemove support for the Windows environment We are removing support for the Microsoft(TM) Windows platform. Our reasons are:\nNeither Visual Studio 2017, which we use, nor VS2019 supports the C11 features we use extensively (stdatomic.h), which forces us to write shims on top of the Windows API. No BIND 9 developer normally uses Windows at this time. BIND 9 does not compile on Windows 10, nor with VS2019; making either of those work would require extensive effort. Windows now has WSL2 (https://docs.microsoft.com/en-us/windows/wsl/install-win10), which can be used to run BIND 9 natively. ISC now publishes a Docker image for BIND 9. The resources that would be required for us to support new Windows and Visual Studio versions would be better spent elsewhere. The June release of the development branch (9.17.14) will be the last release with Windows support included. We have removed the Windows support code from our main branch. The next stable branch, BIND 9.18, will not come with Windows as a supported platform. We will try to create a separate dig.exe and post that as a separate download, so people can still use dig on Windows. If we are able to do this, it will not be an officially maintained product.\nWhile this decision sparked controversy on the bind-users mailing list, when we raised this issue with our support customers several months ago, we received zero requests to extend support for BIND on Windows from them.\nBuilt-in PKCS#11 support on the chopping block for 9.18 Currently, BIND 9 has two PKCS#11 interfaces:\nnative PKCS#11 OpenSSL engine PKCS#11 from the OpenSC project ISC has sponsored significant improvements to the OpenSC engine_pkcs11, and the next OpenSC version (libp11 0.4.12) will include those improvements. The new version has better performance and is maintained by people with specific expertise in PKCS#11. 
Therefore, we intend to drop the native PKCS#11 interface from BIND 9.18 and recommend the OpenSC implementation instead.\n","permalink":"/blogs/bind-update-summer2021/","tags":["BIND"],"title":"BIND Updates"},{"categories":["BIND","DNS"],"contents":"DNS Over HTTPS With BIND 9.17, Ubuntu 21.04 And LetsEncrypt This is a repost of a blog by Richard Neal from TalkDNS, reprinted with permission.\nDNS Over HTTPS DNS over HTTPS (DoH) is becoming much more prevalent now. Both Google Chrome and Mozilla Firefox have supported it since 2019 and Microsoft Windows 10 21H2 is expected to introduce support for it later in 2021. With this in mind it might be time to start planning to support DNS over HTTPS if you run a BIND DNS server.\nThis article shows how to configure BIND 9.17 (with experimental DNS over HTTPS support) on Ubuntu 21.04 and using a LetsEncrypt certificate. Using LetsEncrypt means it’s a no-cost solution and is fully supported by most DNS over HTTPS clients – including Google Chrome, Mozilla Firefox, Microsoft Edge, and of course BIND’s own dig command.\nEmail Address And Server Name Throughout this article I’ll use the following values for email address (for the LetsEncrypt certificate) and for the server name:\nEmail address: certs@talkdns.net Server name: ns1.talkdns.net Wherever you see these values below you should of course replace them with your own email address and your own server name.\nConfiguration Steps We’ll go through the following steps in this article:\nInstall Ubuntu Server 21.04 Install BIND 9.17 (Development release) Configure a very basic named.conf.options purely to get a running BIND instance Test BIND to make sure it’s working at this point Install Certbot to manage our LetsEncrypt certificate Reconfigure AppArmor to allow named to access our LetsEncrypt folders Add the necessary lines to named.conf.options to support DNS over HTTPS Test DNS over HTTPS name resolution Setup Certbot to automatically renew the certificate and to reload 
named’s configuration files whenever it does so All done! 1. Install Ubuntu Server 21.04 Whilst the following steps may very well work with an older Ubuntu release, I have specifically tested this on Ubuntu 21.04 (Server edition). You can download their ISO from here.\n2. Install BIND 9.17 (Development Release) The Ubuntu 21.04 repositories include BIND 9.16, but DNS over HTTPS is currently only available in the BIND 9.17 Development release (specifically 9.17.10 or higher). In order to install BIND 9.17 we therefore need to add ISC’s development branch repos:\n$ sudo add-apt-repository ppa:isc/bind-dev $ sudo apt-get update $ sudo apt install bind9 As BIND9 installs you will see references to ppa.launchpad.net. This confirms that it’s the BIND 9.17 development release that’s being installed rather than Ubuntu’s default BIND 9.16.\nNow we need to make sure BIND starts whenever the server (re)boots:\n$ sudo systemctl enable named If you’re running ufw (the Uncomplicated firewall) then you’ll also need to open the necessary ports for BIND9:\n$ sudo ufw allow Bind9 $ sudo ufw allow http $ sudo ufw allow https We need both HTTP and HTTPS because LetsEncrypt’s Certbot runs on HTTP to validate and generate an HTTPS certificate for our BIND server. 
Alternatively if you’re a seasoned BIND administrator and you have your own standard configuration then please use it. The purpose of the following configuration is simply to prove that BIND “works” before we then start enabling DNS over HTTPS.\noptions { directory \u0026#34;/var/cache/bind\u0026#34;; recursion yes; allow-recursion { any; }; listen-on { any; }; listen-on-v6 { any; }; # you can remove this line if you don\u0026#39;t need IPv6 support dnssec-validation auto; }; 4. Test BIND To Make Sure It’s Working At This Point We should now start named and make sure traditional DNS name resolution is working (over UDP and TCP port 53). First we validate our named.conf.options file:\n$ sudo named-checkconf /etc/bind/named.conf.options If that command returns no results then we’re all good – BIND has found no errors in our configuration file. So now we’ll start the named daemon and then run a quick test to make sure DNS name resolution is working correctly. We issue the systemctl restart command in case this system has been rebooted already (remember that we set named to start automatically earlier in this process):\n$ sudo systemctl restart named $ dig @ns1.talkdns.net isc.org A If all goes well you should get the IP address of isc.org: 149.20.1.66\n5. Install Certbot To Manage Our LetsEncrypt Certificate Now it’s time to install Certbot to manage our LetsEncrypt certificate:\n$ sudo apt install certbot Once installed we’ll use Certbot to issue our certificate. Remember to replace the email address and server name with your own of course!\n$ sudo certbot certonly --standalone --preferred-challenges http --agree-tos --email certs@talkdns.net -d ns1.talkdns.net If all goes well you should see a message similar to the following:\nCongratulations! Your certificate and chain have been saved at: /etc/letsencrypt/live/ns1.talkdns.net/fullchain.pem Your key file has been saved at: /etc/letsencrypt/live/ns1.talkdns.net/privkey.pem Your certificate will expire on 2021-08-28. 
The contents of the “live” folder are actually symlinks to the real certificate which is stored in the “archive” subfolder of /etc/letsencrypt/\nNow we need to fix the permissions on the live and archive folders so that named has permission to read these files:\n$ sudo chmod 0755 /etc/letsencrypt/live $ sudo chmod 0755 /etc/letsencrypt/archive And then we need to change the group and permissions for the private key file\n$ sudo chgrp bind /etc/letsencrypt/live/ns1.talkdns.net/privkey.pem $ sudo chmod 0640 /etc/letsencrypt/live/ns1.talkdns.net/privkey.pem 6. Reconfigure Apparmor To Allow named To Access Our LetsEncrypt Folder By default AppArmor will block named from accessing the certificates in the LetsEncrypt folder. Rather than modifying AppArmor’s named profile (which would be overwritten during an upgrade) we create a special AppArmor local file. These local files are a supported way of telling AppArmor that we have a slightly special configuration and they are preserved during an upgrade.\n$ sudo nano /etc/apparmor.d/local/usr.sbin.named This file should contain a single line as follows, including the comma at the end. It allows recursive read-only access to all files and folders in /etc/letsencrypt:\n/etc/letsencrypt/** r, Now we need to reload the named AppArmor profile:\n$ sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.named 7. Add The Necessary Lines To named.conf.options To Enable DNS Over HTTPS Enabling DNS over HTTPS support in BIND means editing the named.conf.options file again. 
We need to make two changes:\nTell BIND where our private key and certificate are stored Set the appropriate listen-on statement to tell BIND to listen on port 443 $ sudo nano /etc/bind/named.conf.options Add the following lines to the very top, above the options { statement:\ntls local-tls { key-file \u0026#34;/etc/letsencrypt/live/ns1.talkdns.net/privkey.pem\u0026#34;; cert-file \u0026#34;/etc/letsencrypt/live/ns1.talkdns.net/fullchain.pem\u0026#34;; }; Then add the following lines in the main options statement (e.g. right before the final “};“:\nlisten-on port 443 tls local-tls http default {any;}; listen-on-v6 port 443 tls local-tls http default {any;}; # you can remove this line if you don\u0026#39;t need IPv6 support Save the file, and then let’s check again that our named.conf.options file is OK:\n$ sudo named-checkconf /etc/bind/named.conf.options Provided all is well we now reload named so that it reads our configuration changes. We use reconfig rather than reload because we haven’t made any changes to our zone files:\n$ sudo rndc reconfig If we now quickly check the syslog we should find that BIND has loaded successfully:\n$ tail /var/log/syslog Make sure there are no errors regarding file access, the TLS cert, or AppArmor\n8a. Test From The Local System We can test DNS over HTTPS from the local system by using dig and specifying a DoH query by using the +https parameter:\ndig +https @ns1.talkdns.net isc.org A If all is well then you should once again get the IP address of isc.org: 149.20.1.66. If you look closely at dig’s output you should also see something similar to the following. This confirms that the query/response operation was performed successfully over HTTPS (TCP port 443) rather than traditional UDP port 53:\n;; SERVER: 51.210.161.197#443(ns1.talkdns.net) (HTTPS) Alternatively if DNS over HTTPS isn’t working then you’ll either get an error message or dig will timeout.\n8b. 
Test From A Remote System Ideally you should now test from a remote system as well. If you’re using ufw or any other software firewall then that will also confirm that you have the necessary ports open. If you’re running more than one name server (and if not, why not?!) then you can run dig (from BIND 9.17.11 upwards) to make the same query as above.\n9. Setup Certbot To Automatically Renew The Certificate And To Reload named’s Configuration Files Whenever It Does So Our last step is to make sure that Certbot is set up to renew the certificate when necessary, and to reload named’s configuration whenever it does so. This step is important because otherwise named won’t be using the correct HTTPS certificate.\n$ ls /etc/cron.d Make sure that “certbot” exists (if not then go to the Certbot homepage to read how to set up automated renewal)\nNow we need to add a deploy hook to Certbot so that named reloads its configuration files whenever Certbot renews a certificate. Here again we use reconfig rather than reload because we’re not making any changes to zone files at this point. On a BIND server with many zone files it’s much quicker to simply reconfig rather than go through a full (and unnecessary) reload:\n$ sudo nano /etc/letsencrypt/cli.ini Add the following line at the end:\ndeploy-hook = rndc reconfig You can test this by running:\n$ sudo certbot renew --dry-run You should see:\nDry run: skipping deploy hook command: rndc reconfig 10. All Done! That’s it! You’re all done. You now have a BIND 9.17 name server running DNS over HTTPS using an auto-renewing LetsEncrypt certificate.\n","permalink":"/blogs/doh-talkdns/","tags":["BIND","DoH"],"title":"DNS Over HTTPS With BIND 9.17"},{"categories":["ISC","Company News"],"contents":"We\u0026rsquo;re happy to share another short profile of one of our engineers and we hope our customers enjoy learning a little more about our team!\nIn our last profile post, we said that Chuck Stearns was our newest Support Engineer. 
However, he no longer holds that title, as Everett Fulton joined the ISC team on March 8, 2021. It turns out that Chuck and Everett have a lot in common!\nLike Chuck, Everett most recently worked for Rackspace Technologies, and he lives in Texas. And they are both largely self-taught: Everett has always been an avid computer enthusiast, starting out with a TRS-80 Model I. His early work background was in avionics and radio systems engineering; he then moved to a wireless ISP to combine his radio experience with networking and systems operations. (It was there that he learned\u0026ndash;the hard way\u0026ndash;not to configure an open resolver!) During his 15.5 years at Rackspace, he started on the Linux support team but then transitioned to an operational role as a DNS and NTP engineer.\nEverett has remarked that \u0026ldquo;The shift to specialized software support has been a refreshing change from my previous operations-centric experience. ISC\u0026rsquo;s customers have a wide variety of approaches to service delivery.\u0026rdquo; He is happy to be working on open source software because \u0026ldquo;Open source has always been about freedom. It tends to provide a wide range of features and very few arbitrary limitations. Users can look at the source to see how it works, modify it, and share their improvements with others.\u0026rdquo;\nOf coming to ISC, he says, \u0026ldquo;I get to work with some amazingly intelligent folks. ISC is one of the legendary organizations at the very core of the Internet, and it\u0026rsquo;s the ultimate honor to be a part of it.\u0026rdquo;\nOne of the many strengths Everett brings to our support team is his perspective as a former ISC support customer. He says, \u0026ldquo;There\u0026rsquo;s no better way to understand what our customers want! My operational experience is a good complement to the software engineering side of the house.\u0026rdquo;\nDespite his somewhat forbidding appearance, Everett is a true Renaissance man. 
He quotes Robert A. Heinlein when asked to share his personal philosophy:\n\u0026ldquo;A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.\u0026rdquo;\nWe\u0026rsquo;re so happy to have him as part of the ISC team and are confident that our support customers will benefit from working with him.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-fulton/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Everett Fulton!"},{"categories":["BIND","Open Source","Security"],"contents":"Recently announced TsuNAME vulnerability does not impact BIND 9 Last week researchers announced the TsuNAME vulnerability that impacted several of the largest hosted DNS services on the Internet, including Google\u0026rsquo;s public DNS and Cisco\u0026rsquo;s OpenDNS. ISC was notified of the vulnerability long before the public disclosure, and we verified that BIND already had defenses against this problem.\nTsuNAME is a problem that affects authoritative zone operators hosting zones that have cyclical dependences between the NS RRsets in their zones. This causes some resolvers to loop, endlessly sending queries while following the referrals. The result is the authority servers are overwhelmed and the resolver resources are consumed with these unending queries. This has actually happened with at least two ccTLD operators. The solution is for zone operators to detect and remove cyclical dependencies. 
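The check that zone operators need is essentially a cycle search over a graph of "zone depends on the zones hosting its nameservers." A minimal sketch of that idea (the zone names, delegation map, and helper functions here are hypothetical illustrations; a real checker such as CycleHunter builds the map from live NS lookups):

```python
# Toy cycle check over NS delegations (illustration only; not how any
# production tool is implemented).

def parent_zone(name):
    # Naive: treat everything past the first label as the hosting zone.
    return name.split(".", 1)[1] if "." in name else ""

def find_cycles(ns_map):
    """ns_map maps a zone to its nameserver hostnames; returns the
    zone cycles found by following 'zone -> zone hosting its NS'."""
    # In-bailiwick nameservers (ns1.example.org for example.org) are
    # normally resolvable via glue records, so self-dependencies are
    # ignored -- the one exception the TsuNAME authors note.
    deps = {zone: {parent_zone(ns) for ns in servers} - {zone}
            for zone, servers in ns_map.items()}
    cycles = []

    def walk(zone, path):
        if zone in path:
            cycles.append(tuple(path[path.index(zone):]))
            return
        for dep in deps.get(zone, ()):
            walk(dep, path + [zone])

    for zone in ns_map:
        walk(zone, [])
    return cycles

# Hypothetical broken setup: each zone's only nameserver lives under
# the other zone, so neither can ever be resolved.
broken = {
    "example.org": ["ns1.example.net"],
    "example.net": ["ns1.example.org"],
}
```

Running find_cycles(broken) reports the example.org/example.net loop, while an empty result means no cross-zone cycle was found in the map supplied.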
In addition, all DNS resolvers should have limits in place to curb excessive looping that might occur due to following cyclical NS delegations.\nBIND already has recursion limits that prevent looping. As it says in the advisory:\n\u0026ldquo;We configure a test zone with cyclic dependency and evaluate popular DNS resolvers: Unbound (v 1.6.7) [8], BIND (v 9.11.3) [5], and KnotDNS (v 5.1.3) [1], on a VM on AWS EC2 (FRA). We found that none of them start looping in the presence of cyclic dependent domains, hence are not vulnerable to it.\u0026rdquo;\nISC has tested and evaluated BIND and confirmed it is not vulnerable to this problem. All modern BIND resolvers are unaffected by this issue because of these two resolver limits (tunable) that prevent BIND from looping endlessly:\nmax-recursion-queries\nThis sets the maximum number of iterative queries that may be sent while servicing a recursive query. If more queries are sent, the recursive query is terminated and returns SERVFAIL.\nmax-recursion-depth\nThis sets the maximum number of levels of recursion that are permitted at any one time while servicing a recursive query. Resolving a name may require looking up a name server address, which in turn requires resolving another name, etc.; if the number of recursions exceeds this value, the recursive query is terminated and returns SERVFAIL.\nOur advice to DNS operators: Resolver operators: Run a modern version of BIND that includes the max-recursion-depth and max-recursion-queries feature options. The default settings should be appropriate in almost all situations (they do not need to be configured explicitly).\nSee also CVE-2020-8616 in which we ensured that there are no exceptions to these checks:\n[security] Further limit the number of queries that can be triggered from a request. Root and TLD servers are no longer exempt from max-recursion-queries. Fetches for missing name server address records are limited to 4 for any domain. 
(CVE-2020-8616) [GL #1388] Authoritative zone owners and hosts: It is not possible via simple inspection of individual zones to see if they could be involved in a cyclical dependency as described by the authors of the TsuNAME paper (with one exception - that being that all nameservers for a zone are within that actual zone itself). Therefore we recommend using a cyclical dependency checker tool to test your zones now, and on a regular basis hereafter. The authors of the TsuNAME paper suggest using this tool:\nhttps://github.com/SIDN/CycleHunter\n","permalink":"/blogs/2021_tsuname_vulnerability/","tags":["BIND","Security","CVE"],"title":"TsuNAME DNS Vulnerability and BIND 9"},{"categories":["Open Source","ISC","Annual Reports"],"contents":"With great pride, we present ISC\u0026rsquo;s annual report for 2020. The year was a terrible one in so many ways, but our organization has been fortunate. The open source software we create and maintain is a critical part of the Internet infrastructure, and the Internet has become more important than ever. We are happy to share our accomplishments with you.\nWe invite feedback on our annual report at marketing@isc.org, and we thank our support customers and contributors for their ongoing support of our mission.\n","permalink":"/blogs/iscs-2020-annual-report/","tags":["2020","ISC","BIND","Kea","DHCP","ISC DHCP","Stork"],"title":"ISC’s 2020 Annual Report"},{"categories":["Open Source","Kea"],"contents":"We have updated the Kea release plan in our Knowledgebase to show that we expect to issue new major versions approximately every year, and each one will be supported for two years.\nThe next stable version will be Kea 2.0 The next major branch will be called Kea 2.0, in recognition of the major architectural changes included, and it will be released in Q3 of 2021. Kea 2.0 will be our first stable version that is entirely multi-threaded; it will also be more secure, providing native HTTPS authentication for the API. 
Both of these features have required many smaller commits to develop, so users have seen the features appear in Kea 1.9 development versions, but we want to complete the implementations before we release the next branch. We know that many Kea users are eager to deploy these features in production as soon as possible.\nWe maintain a matrix showing significant features and when they were first introduced, which will be updated when we release Kea 2.0.\nWhat is a stable version? A stable version is one in which we minimize any potentially destabilizing changes. Generally, new features are not added, although in rare cases we backport a feature that current deployments need for improved stability or manageability. If changes need to be made to an existing feature, we try to find a way to maintain the pre-existing default behavior. Normally, we issue maintenance releases on a stable version only when there are significant bugs to address. If you don\u0026rsquo;t see any new minor versions, that is good news.\nHowever, due to maintenance, there may be changes to the REST API and/or the database schema in a minor release of a stable version. If you update from one version of Kea to another, even a minor version change, you will also need to update whatever Kea hooks you are using. We version-control the hooks along with the Kea versions, even if there is no change in the hook library itself.\nWe do not plan to add new operating system versions to the ISC packages for a stable version. When we produce a new stable version we assess the currently supported versions of our supported OSes and try to maintain those for the life of the stable branch. In the case of an operating system with a very short lifecycle (e.g. 
Fedora), we may need to make an exception to this policy, for example if all supported OS versions go EOL during the life of the stable ISC branch.\nKea roadmap for new feature development If you are interested in following updates to our development roadmap, you may view the monthly milestones we create in GitLab. When we begin work on a new milestone we create a list of issues we would like to address. This list is typically an ambitious goal, and nearly always there are some issues we don\u0026rsquo;t complete during the milestone that are then deferred to a later milestone. If there is an issue you are particularly interested in, you are welcome to upvote it or add a comment to that effect in our GitLab issue tracker.\n","permalink":"/blogs/kea-2021-roadmap/","tags":["Open Source","Kea"],"title":"Kea Roadmap Update"},{"categories":["ISC","Company News","F Root","Compliance"],"contents":"Internet Systems Consortium today submitted comments on the proposed NIS2 Directive to the European Commission. The Directive may unintentionally draw all 12 of the world\u0026rsquo;s root server operators into regulatory coverage. This brings the potential to ignite multinational regulatory conflict, to destabilize the diversity that underpins global trust in the root name server system, to undermine the current multistakeholder arrangements that govern the engineering structure of the Internet, and to fragment the Internet’s global unitary DNS system.\nThe submission is also available online on the European Commission website.\n","permalink":"/blogs/isc-response-to-nis2/","tags":["F Root"],"title":"ISC Response to NIS2"},{"categories":["BIND","Open Source","Security"],"contents":"Status of DNS-over-HTTPS support in BIND 9 as of March, 2021 The latest development release of BIND 9 contains a significant number of improvements to DNS-over-HTTP (DoH). The most noticeable one is the initial support for DoH in the dig utility. 
It is something that was lacking in the previous release, so we are completing this missing piece. Now dig can be officially added to the Hall of Fame of DoH diagnostic tools.\nAlthough this is the first release to include this support, it is complete enough to test real-world DoH deployments. It is safe to say that if a DoH deployment does not work with dig, some other clients might have troubles using the service as well. It is particularly good at detecting Application-Layer Protocol Negotiation (ALPN) configuration errors, which might be a non-obvious aspect of DoH deployment for some of our users. We have also tested that it works as expected with many well-known public DNS resolvers supporting DoH.\nAll HTTP methods required by RFC8484 are supported (GET and POST) for both encrypted and unencrypted connections, with POST being used by default. Here is an example of dig usage for querying the A record for isc.org via the DoH server doh.example.com using the default DoH endpoint location (/dns-query):\n$ dig +https @doh.example.com isc.org A ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.17.10 \u0026lt;\u0026lt;\u0026gt;\u0026gt; +https @doh.example.com isc.org A ; (1 server found) ;; global options: +cmd ;; Got answer: ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 56070 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 1232 ; COOKIE: e40f41fc38f3768e01000000604a20425472ceb9ec62ced0 (good) ;; QUESTION SECTION: ;isc.org.\tIN\tA ;; ANSWER SECTION: isc.org.\t60\tIN\tA\t149.20.1.66 ;; Query time: 306 msec ;; SERVER: 1.2.3.4#443(doh.example.com) (HTTPS) ;; WHEN: Thu Mar 11 15:50:58 EET 2021 ;; MSG SIZE rcvd: 80 Dig supports the following methods for querying a DoH-enabled DNS server:\n+https, +https-post - send a query using the HTTP POST method via an encrypted connection; +https-get - send a query using the HTTP GET method via an encrypted connection; 
+http-plain, +http-plain-post - send a query using the HTTP POST method via an unencrypted connection; +http-plain-get - send a query using the HTTP GET method via an unencrypted connection. When a non-default endpoint location is needed (the default is /dns-query), it can be specified after the method, preceded by an equals sign (e.g. +https=/alternative-location).\nThe unencrypted modes are intended for cases where TLS offloading is planned. In these cases, unencrypted HTTP/2 queries are sent without any HTTP protocol version negotiation. As far as we are aware, BIND is the only open source DNS server that supports this feature.\nFor additional information, you may want to consult the relevant manual pages.\nAlthough the addition of DoH support to dig is the most noticeable change to BIND\u0026rsquo;s DoH functionality, the server-side code has received a stream of important changes and stability improvements as well. One of the more noticeable ones from the end-user\u0026rsquo;s perspective is BIND\u0026rsquo;s ability to return meaningful HTTP status codes in case of errors.\nThe following HTTP status codes are returned by BIND:\n200 OK - the query was received and processed by BIND so that it could return an answer, even if it is a negative one (e.g. SERVFAIL); 400 Bad Request - this signifies that the request contains some consistency errors (e.g. 
an unexpected header for the given request type, a Content-Length of 0 in a POST request, etc.); 404 Not Found - the wrong endpoint location was specified in the request; 413 Payload Too Large - the size of the payload in a POST request was too large; 414 URI Too Long - this means that there was too much data in the :path pseudo-header of a GET request (which usually means that there was too much Base64URL-encoded data in the request); 415 Unsupported Media Type - this means that either the client sent a request with a media type other than application/dns-message, or the client has not specified that it supports this media type; 501 Not Implemented - this means that the HTTP request method was something other than POST or GET; 500 Internal Server Error - the generic error code that is returned in all other cases. It usually signifies some low-level problem (e.g. Base64URL data from the GET request cannot be decoded). Here we followed Unbound\u0026rsquo;s lead, using roughly the same status codes in similar situations (https://blog.nlnetlabs.nl/dns-over-https-in-unbound/). We have to note, though, that in some cases, BIND just resets an HTTP stream if the request is too large or looks malicious, without returning any error codes.\nAlso, it is now possible to write a shorter configuration file for BIND if all you want is to serve DoH with default settings, using default as the HTTP endpoint name in listen-on statements:\n# Private key and certificate pair for TLS tls local-tls { key-file \u0026#34;/path/to/priv_key.pem\u0026#34;; cert-file \u0026#34;/path/to/cert_chain.pem\u0026#34;; }; options { listen-on port 53 {any;}; listen-on-v6 port 53 {any;}; allow-recursion {any;}; # example for DoH using default configuration # listening on all IPv4 addresses. 
# port number can be omitted listen-on port 443 tls local-tls http default {any;}; # the same for IPv6 listen-on-v6 port 443 tls local-tls http default {any;}; }; You can compare the example above to the similar example from the previous post. We expect most users to use the default configuration for DoH unless they have some very specific needs.\nEphemeral certificates (for testing purposes only) An additional feature might be handy for testing purposes (although it does not directly relate to DoH). Analogous to specifying the default HTTP endpoint in listen-on statements, one could specify ephemeral as a TLS certificate-key pair. In this case, BIND generates a TLS certificate and key at runtime, saving a bit of the hassle of doing so by hand. With this, we can make the configuration example above even shorter:\noptions { listen-on port 53 {any;}; listen-on-v6 port 53 {any;}; allow-recursion {any;}; listen-on port 443 tls ephemeral http default {any;}; listen-on-v6 port 443 tls ephemeral http default {any;}; }; It is worth mentioning that these ephemeral certificates cannot work in real-world deployments, because clients would fail to validate them.\nAlthough BIND with DoH support was first released just a month ago, a lot has happened on this front since then. DoH support is still an experimental feature, but we ship a much more mature version with this release. However, there are many things to improve and bugs to fix. Hence, we continue working hard so that our users can eventually deploy this feature in production, providing a viable alternative to existing centralised services and contributing to the diversity of the Internet infrastructure.\n","permalink":"/blogs/bind-doh-update-2021/","tags":["BIND","Privacy","DoH"],"title":"BIND DoH Update"},{"categories":["ISC","Open Source"],"contents":"Here\u0026rsquo;s another short profile of one of our engineers. 
We hope our customers enjoy learning a little more about our team!\nChuck Stearns, our newest Support Engineer, joined ISC on January 1, 2021. Chuck spent nearly 14 years working in frontline support and on DNS engineering at Rackspace Technology; prior to that he ran his own web design business and worked at several local ISPs near his home in Texas.\nChuck started working on Linux during his freshman year of college, when a dorm buddy opened his eyes to it. Linux got him interested in more than application space, and over the last 20+ years he\u0026rsquo;s learned about the internals of Linux \u0026ldquo;from userland down through kernel space and even out onto the network.\u0026rdquo; Although he hasn\u0026rsquo;t had a lot of formal computer science education, his hands-on practical experience definitely comes in handy as he works with ISC\u0026rsquo;s customers.\nHe says that so far, there are two things that he really enjoys about ISC:\nThe people. \u0026ldquo;They are tremendously smart, and I\u0026rsquo;ve been welcomed into the family. That feels really good.\u0026rdquo; \u0026ldquo;The idea that I get to help contribute to something essential to the Internet as a whole is very rewarding.\u0026rdquo; In his previous technical support and IT systems engineering work, Chuck\u0026rsquo;s experience with software delivery was limited to a team scale, so the support role at ISC has come as a bit of a shock. \u0026ldquo;It\u0026rsquo;s nothing like I thought it would be, going in. At ISC, relationships with our customers and each other are a priority - we try to be kind AND helpful at the same time. It\u0026rsquo;s a refreshing perspective.\u0026rdquo;\nHe has chosen to work on open source rather than commercial software because \u0026ldquo;open source is more than a computer program. It\u0026rsquo;s got a heartbeat. 
It drives technically inclined people - and anyone else who\u0026rsquo;s interested - to put forth their best work selflessly, for the benefit of all.\u0026rdquo; He also likes that \u0026ldquo;if I find a broken piece, or something I need to change to suit my individual need, I can patch it and make it work.\u0026rdquo; He adds, \u0026ldquo;the documentation and communities around many open source projects are absolutely top-notch, which is a great resource whether you\u0026rsquo;re contributing or implementing.\u0026rdquo;\nChuck also notes that there is much more activity in the open source community than he expected. \u0026ldquo;Within my first week, I was put in touch with some folks at DNS-OARC to discuss functional documentation aspects of catalog zones, something with which I\u0026rsquo;ve had a bit of practical experience in the past. I\u0026rsquo;ve also learned a lot about what is involved in the process of maintaining and releasing enterprise-level software. I\u0026rsquo;ve still got a lot of learning left to do, but the immersion into the larger community has been quite enlightening.\u0026rdquo;\nChuck says that one of his particular strengths is that \u0026ldquo;I\u0026rsquo;ve always had a knack for taking complex technical topics and translating them into human-readable formats, which I hope will prove useful during my tenure here.\u0026rdquo;\nChuck\u0026rsquo;s hobbies include writing novels and listening to music. He says, \u0026ldquo;Texas blues is great, but it\u0026rsquo;s really hard to beat that old Delta style, man.\u0026rdquo;\nWe\u0026rsquo;re delighted to welcome him to the ISC team and know that our support customers will enjoy working with him.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. 
We welcome your feedback at marketing@isc.org!\n","permalink":"/blogs/isc-profile-stearns/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Chuck Stearns!"},{"categories":["ISC","Company News"],"contents":"Here at ISC, we are constantly working to improve our software and to look forward at what our customers and users will need from us in the future. But it\u0026rsquo;s important to also look back at our past and see how far we\u0026rsquo;ve come.\nIn that spirit, we present the ISC Timeline, which is now available for public view. Of course, no timeline can include everything, but we have attempted to catalog some of the important moments of ISC\u0026rsquo;s history and the history of DNS/DHCP in general.\nWe have done our best to represent events and people accurately, and we apologize for any oversights or errors. If you have any moments that you think we should add, or if you have any other feedback for us, please let us know at marketing at isc.org.\nThank you to everyone who has been and/or continues to be part of ISC\u0026rsquo;s journey! We think our history is pretty impressive and hope you will too.\n","permalink":"/blogs/isc-timeline/","tags":["Timeline"],"title":"ISC Timeline"},{"categories":["BIND","Open Source","Security"],"contents":"DNS over HTTPS (DoH) in BIND 9 The February 2021 development release of BIND (9.17.10) comes with initial support for DNS-over-HTTPS (DoH). DoH is a major feature for end-user privacy. DoH conceals DNS traffic inside HTTPS to make it harder for third parties to analyse, intercept, and modify. Deployment of DoH is also a significant stepping stone for wider adoption of the Encrypted Client Hello (ECH) and Encrypted Server Name Indication (ESNI) features of the Transport Layer Security (TLS) protocol. These features further improve users\u0026rsquo; privacy by preventing third parties from snooping, and by making it harder to block websites by analysing and intercepting TLS handshake requests. 
Without ECH, TLS handshakes by default contain server names in unencrypted form.\nDNS-over-TLS (DoT) is a popular alternative to DoH. BIND also supports DoT. A BIND server can accept queries over traditional DNS (aka Do53), DoH, and DoT. Which transport is used for an individual client query depends on what the client uses to contact BIND.\nThe initial release of DoH is in a development release. Since this is an experimental feature, details are subject to change. As soon as this functionality matures enough, we will backport it into the current stable branch (9.16.X).\nBIND’s DoH implementation is based upon the nghttp2 library written by Tatsuhiro Tsujikawa. In BIND 9.17.10 nghttp2 is a hard build requirement. (We will fix this before we backport the feature to a stable branch.) It is safe to say that starting from this release we have a specialised HTTP/2 server built into BIND specifically to serve DNS-over-HTTPS queries.\nBIND\u0026rsquo;s support in this initial release is server-side only. There is no DoH support for dig yet. We are working hard to fill this particular missing piece in a forthcoming release. So, for now, to test if DoH is functional one needs to use a third-party tool. We have tested it with Mozilla Firefox, kdig from Knot DNS, and dog. For a list of DoH clients, see the DNS Privacy Project website or the curl project page on DoH.\nFeatures of the BIND DoH implementation Despite being experimental, and the most recent major addition to BIND\u0026rsquo;s functionality, our DNS-over-HTTPS implementation has some unique features.\nFirstly, even though DoH was designed to protect lookups from clients to caching resolvers and DNS load balancers, BIND has native DoH support for the authoritative DNS server as well. The DoH transport layer should work in the same way as any other transport layer for DNS, so BIND over DoH should handle any DNS request you could expect from other DNS transports. 
It is not clear how this might be used yet, but it is interesting to note.\nThe second unique feature is the ability to offload TLS encryption to another server. We added this feature because, in some environments, management of TLS certificates is tied to control of web infrastructure and, thus, is the responsibility of other teams and individuals. For these and similar scenarios, we added to BIND the ability to serve unencrypted HTTP/2 connections over TCP, so that encryption might be applied later. Having unencrypted HTTP/2 streams might also make troubleshooting (e.g. using Wireshark) easier.\nAn important point regarding TLS offloading: the software which performs offloading needs to negotiate HTTP/2 over the TLS connection using the Application-Layer Protocol Negotiation (ALPN) extension; otherwise, many DoH clients will assume that the established TLS connection is for HTTP 1.1. BIND does not support HTTP versions prior to HTTP/2, so name resolution will fail in that case.\nHow to DoH with BIND Enabling DoH in the development release of BIND is as easy as adding a couple of options to the BIND configuration file. Here is a simple example for a recursive resolver:\n# Private Key and certificate pair for TLS # can be omitted if TLS offloading is being used tls local-tls { key-file \u0026#34;/path/to/priv_key.pem\u0026#34;; cert-file \u0026#34;/path/to/cert_chain.pem\u0026#34;; }; # HTTP endpoint description http local-http-server { # multiple paths can be specified endpoints { \u0026#34;/dns-query\u0026#34;; }; }; options { listen-on port 53 {any;}; listen-on-v6 port 53 {any;}; allow-recursion {any;}; # default ports for HTTP and HTTPS http-port 80; https-port 443; # example for encrypted and unencrypted DoH # listening on all IPv4 addresses. 
# port number can be omitted listen-on port 443 tls local-tls http local-http-server {any;}; listen-on port 80 http local-http-server {any;}; # the same for IPv6 listen-on-v6 port 443 tls local-tls http local-http-server {any;}; listen-on-v6 port 80 http local-http-server {any;}; }; Here https-port and http-port set the default port numbers for encrypted and unencrypted HTTP, for the case when a port number is omitted from listen-on statements.\nThe configuration format might seem a bit wordy considering that there is not much to configure in the DoH layer now, but we specifically decided to settle on this format because it is easily extendable.\nOffloading TLS processing To perform TLS offloading, one could consider using NGINX (this is the one we used during development), but other options are also available. The main requirement is the ability to handle ALPN properly, as was stated earlier.\nIn order to perform TLS offloading for DoH, one can add the following to the NGINX configuration file (into the http {...} section):\n# address and port of the DoH server, serving unencrypted HTTP/2 upstream http2-doh { server 192.168.8.8:80; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name example.com; # domain name # TLS certificate chain and corresponding private key ssl_certificate /path/to/cert_chain.pem; ssl_certificate_key /path/to/private_key.pem; location / { grpc_pass grpc://http2-doh; } } Provided that the software is correctly configured, after starting it one should be able to use DoH via the domain name specified in the NGINX configuration. In the example above, the DoH endpoint address would be\nhttps://example.com/dns-query\nThe BIND 9 Administrator Reference Manual explains how to configure a BIND resolver to listen for DoH queries arriving over HTTP.\nSupport for DNS-over-HTTPS in BIND is an experimental feature not yet ready for production use. 
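As an aside on what actually travels over such a connection: RFC 8484 specifies that a DoH GET request carries the raw DNS wire message, base64url-encoded (RFC 4648) with padding removed, in the dns query parameter of the URI. A minimal shell sketch of constructing that parameter by hand follows; it contacts no server, and the query contents simply mirror the example given in RFC 8484 itself (www.example.com, type A, ID 0):

```shell
# DNS wire message: ID=0, flags=0x0100 (RD), QDCOUNT=1,
# question "www.example.com IN A" (the example used in RFC 8484)
wire='\000\000\001\000\000\001\000\000\000\000\000\000\003www\007example\003com\000\000\001\000\001'

# base64url = base64 with '+' -> '-', '/' -> '_', and '=' padding removed
dns_param=$(printf "$wire" | base64 | tr -d '\n=' | tr '+/' '-_')

echo "GET /dns-query?dns=${dns_param}"
# -> GET /dns-query?dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB
```

This also illustrates why an overlong GET request is rejected with 414 URI Too Long: for GET, the entire DNS message travels inside the :path pseudo-header.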
We encourage everyone who is interested in deploying it to try it now and report back any problems or give us suggestions if something is lacking. We would be happy to get as much feedback on it as we can!\nStill on our \u0026ldquo;to-do\u0026rdquo; list add client-side support so dig can launch DoH queries add counters for DoH traffic (increment existing counters for TCP traffic for DoH queries) add system-level tests (we have unit tests already) remove the requirement to build with the nghttp2 library for users who don\u0026rsquo;t want DoH support consider enabling XFR over DoH (let us know if you would use this!) backport DoH when stable to a 9.16 version (April or May most likely) FAQs regarding DoH What is DoH? DNS over HTTPS is a way to transport DNS queries and responses over HTTPS, using the TLS security provided by HTTPS to encrypt those messages. DoH is defined by IETF RFC 8484 for communications between a DNS client and recursive resolver. It is an alternative transport to the long-time standard \u0026ldquo;native\u0026rdquo; DNS, carried in UDP packets and received over port 53.\nWhat problem does DoH address? What is the benefit of DoH? Ordinary unencrypted DNS queries send the information about the DNS name being queried in the open, leaking potentially sensitive information about the client. The DNS Privacy Project has a great summary of the issues DoH and DoT are addressing. DoH provides one way to encrypt the DNS queries from the client to the recursive resolver, thus keeping the content of this exchange confidential in transit. It does not ensure that the answers from the resolver are authentic, as published by the domain authority.\nTo quote RFC 8484:\n“Two primary use cases were considered during this protocol\u0026rsquo;s development. 
These use cases are preventing on-path devices from interfering with DNS operations, and also allowing web applications to access DNS information via existing browser APIs in a safe way consistent with Cross Origin Resource Sharing (CORS) [FETCH].”\nThe original motivation for DoH was to offer the end user a way to bypass their access provider’s DNS system, and to prevent their access provider from snooping, blocking or monetizing their DNS query traffic. With DoH, these queries are directed at a specified server which may or may not be in the access provider’s infrastructure. The DoH queries are only visible to whoever is operating the DoH server. There was also a sense that tying the DNS to the HTTP/browser application would enable greater efficiencies, since most DNS lookups do lead to an HTTP connection.\nDoH References\nISC Blog on DoH - https://www.isc.org/blogs/doh-encrypted-dns/ DNS Privacy dot org - https://dnsprivacy.org/wiki/ is an excellent, authoritative resource on the different DNS privacy solutions and how they compare. Jan Schaumann’s blog on DoH, https://www.netmeister.org/blog/doh-dot-dnssec.html Excellent summary of one enterprise’s choice with respect to the DoH/Firefox controversy, as of September 2019 - from Tony Finch of Cambridge University, https://www.dns.cam.ac.uk/news/2019-09-19-dont-use-application-dns.html NSA (US National Security Agency) brief on \u0026lsquo;Adopting Encrypted DNS in Enterprise Environments\u0026rsquo;, January 2021, https://media.defense.gov/2021/Jan/14/2002564889/-1/-1/0/CSI_ADOPTING_ENCRYPTED_DNS_U_OO_102904_21.PDF What are the Concerns About DoH? Increased Network Security Risk\nOne of the significant characteristics of DoH is that it makes DNS traffic indistinguishable from the much higher-volume HTTPS traffic. In well-managed networks, DNS is used as a control point to identify and block suspected abuse or unwanted traffic, using RPZ or other mechanisms for DNS filtering. 
DoH renders these measures ineffective.\nThere have also been exploits discovered in which DoH is used specifically to hide and transport malicious code.\nDNS Provisioning\nIn traditional DNS (Do53) the client is provisioned with the address of a DNS service, typically in a DHCP option when the client obtains its IP address. This information is obtained and provided at the operating system level. With DoH, some browser implementations wanted to provide for a DNS setting at the application level that could differ from the operating system setting.\nOne significant issue with the design of DoH is how to configure the client with an appropriate DoH server. Several web browsers are shipping with a few DoH providers helpfully pre-loaded, the same way they might pre-configure Google as the default search engine. The IETF Working Group on Adaptive DNS Discovery is working on specifying methods for client discovery of DoH servers.\nCentralization of the Internet\nThere is a significant and well-founded concern that adoption of DoH will lead to more centralization of DNS query traffic, which in turn is bad for freedom and privacy on the Internet.\nCentralized DoH is bad for privacy - RIPE NCC Blog by Bert Hubert, https://labs.ripe.net/Members/bert_hubert/centralised-doh-is-bad-for-privacy-in-2019-and-beyond\nGiven all this controversy, why did ISC implement DoH in BIND? We try to implement the major DNS RFCs in BIND, so that it can be a reference implementation, to the extent possible. It is not always possible for us to implement every RFC but we felt this one was consequential. We want to enable people to run services on the Internet, and implementing DoH is part of that. 
We cannot prevent the centralization of control on the Internet, but we can provide open source that enables operators to offer competitive services.\n(FAQs added by Vicky Risk, any mistakes are hers)\n","permalink":"/blogs/bind-implements-doh-2021/","tags":["BIND","Privacy","DoH"],"title":"BIND Implements DoH"},{"categories":["BIND","Open Source","Events"],"contents":"Practical BIND 9 Management BIND 9 can produce an extensive array of log messages and metrics. Experienced DNS administrators plan ahead to capture the most important metrics and logs, and to leverage available open source tools for capturing and analyzing them.\nThis series of five webinars, presented by Carsten Strotmann, will cover best practices and tips for monitoring and managing your BIND 9 deployment. Sessions 1, 2, and 3 are appropriate for both Authoritative and Recursive applications. Sessions 4 and 5 are primarily focused on Authoritative applications.\nRegister once for the entire series: all sessions will be recorded and the recordings posted within a few days on our website and our YouTube channel, so you can easily review any sessions you miss. Each session will last approximately one hour, including Q\u0026amp;A as time permits.\nSessions are scheduled at 8 AM PST, which is 16:00 UTC. However, Daylight Savings Time ends in the US on March 14 and not until March 28 in Europe. 
Please check your invite from Zoom for the correct time in your timezone.\nSession Topics:\nFeb 17: Monitoring, part 1 - Setting Up, Managing and Using Logs Recording\nBest practice: BIND 9 log template for an authoritative server Best practice: BIND 9 log template for a DNS resolver File system best practices for BIND 9 log file handling BIND 9 log analysis with Linux Systemd Identifying hot-spots in BIND 9 logfiles with histograms Condensing information from large BIND 9 logfiles with \u0026ldquo;logeater\u0026rdquo; tools Monitoring BIND 9 runtime information with \u0026ldquo;atop\u0026rdquo; Event-based BIND 9 log and system analysis Incident-based DNSTAP and TCPDUMP traffic capture Note the templates Carsten recommends can be found at: https://webinar.defaultroutes.de/webinar/bind9-logging-template.html\nISC Knowledgebase article that may be useful to review as well:\nBIND logging, some basic recommendations March 17: Monitoring, part 2 - Long-Term Statistics Monitoring and Log Analysis Recording\nIdentifying outliers in BIND 9 logfiles BIND 9 monitoring from the \u0026ldquo;named.stats\u0026rdquo; file BIND 9 monitoring with the statistics channel Using open source tools to store and display metrics Using open source tools to search and analyze logs BIND 9 logs and remote syslog best practice Best practices for metrics to monitor for authoritative and recursive ISC Knowledgebase articles that may be useful to review ahead of time include:\nBest Practices for Recursive Best Practices for Authoritative April 21: Load Balancing With DNSdist Recording\nInstallation and configuration Applications for DNSdist Aggregating metrics across a cluster Cache concentration Load balancing for authoritative Load balancing for resolver slides This webinar included a half hour hands-on lab, after the main presentation. 
The VMs used by the lab are no longer available, but the instructions are still available.\nMay 19: Dynamic Zones, part 1 - Basics Slides, Recording\nDynamic zones vs. static zones Provisioning, updating dynamic zones June 16: Dynamic Zones, part 2 - Advanced Topics Slides, Recording\nTSIG Updating ACLs with \u0026ldquo;grant\u0026rdquo; Catalog zones ISC is the developer and maintainer of the BIND 9 software system. This work is supported by software support contracts; no purchase is required to attend these training sessions or to download and use the open source. If your company or organization is interested in purchasing professional support for BIND 9 and supporting ISC\u0026rsquo;s development work, please complete our contact form.\nSpeaker Information: Carsten Strotmann has 20 years of professional experience working in open source networking. He has delivered many webinars and training classes on open source solutions and DNS/DHCP/IPv6/Linux/Unix security. He also is the author of various articles on IT security topics in computer magazines.\n","permalink":"/blogs/bind-management-webinar-series-2021/","tags":["BIND","Webinars","Events","Training"],"title":"Managing BIND 9 - Webinar Series"},{"categories":["ISC","Company News"],"contents":"While sitting at your desk, hard at work managing your BIND server or your Kea DHCP instance, have you ever thought to yourself, \u0026ldquo;Oh, how I wish I had an awesome ISC shirt to wear, and a cool ISC hat to keep my head warm, and a beautiful ISC mug to hold my coffee\u0026rdquo;?\nNo? Maybe it\u0026rsquo;s just us.\nBut many of our software users do recognize the importance of supporting ISC\u0026rsquo;s open source development. We thought it would be fun to also give our friends a more tangible way to show their support for ISC. Plus, with the pandemic preventing in-person tech conferences, our supply of t-shirts is getting stale.\nIntroducing\u0026hellip;\nThe ISC Swag Store! 
By purchasing items from our swag store, you can look great and feel great too! When you buy from us, you\u0026rsquo;ll help finance our non-profit mission and contribute to a more open Internet for everyone.\nMaybe you\u0026rsquo;d like a t-shirt that says \u0026ldquo;One name-d(a)emon to rule them all and in the darkness BIND them\u0026rdquo; (get it?) in the Tengwar script from Lord of the Rings. Or perhaps you\u0026rsquo;re a fan of the late Evi Nemeth\u0026rsquo;s 9-layer OSI model. Or it could be that you want a nice-looking sleeve for your laptop. All these and more are available now!\nPlease visit https://shop.isc.org/ and make your purchase today to help support ISC. And please let us know if there\u0026rsquo;s an item you\u0026rsquo;d like that isn\u0026rsquo;t yet available: we will add more!\nAnd of course, many thanks for your support.\n","permalink":"/blogs/isc-swag-store/","tags":["swag","store","open source"],"title":"ISC's Swag Store"},{"categories":["Open Source","Kea","Tools"],"contents":"There is nothing more satisfying than seeing that users are extending and modifying your open source project. The highest compliment you can get is when someone else is willing to invest their time and creativity into improving on something you have published. The list below is an assortment of projects we have stumbled across, or been told about, that may be of use to Kea users. In general we have not tried these, and we cannot comment on their \u0026lsquo;quality\u0026rsquo;, but that is always a function of time - if you use one of these, and like it, please consider giving the creator feedback and encouragement.\nDHCP Load Balancer This one is a category all by itself. This is not Kea-specific, but it is a pretty significant tool. BSD-licensed. 
Although it was written several years ago it appears to be currently maintained.\nDHCPlb - Load Balancer Engineers at Facebook wrote this dhcpv4/v6 load balancer and relay to use with the Kea DHCP server as an anycasted DHCP service. Kea Hooks Modules Please note that Kea hooks from ISC are republished with every new version of Kea. Some of these non-ISC hooks may only work with certain versions of Kea.\nName Description License Author Link Netbox kea-dhcp updater Creates static reservations for addresses in NetBox Apache 2.0 Brian Candler https://github.com/candlerb/netbox-webhook-kea-dhcp python3 Python3 interface for Kea hooks MPL-2.0 Dave Cole https://github.com/davejohncole/kea_python options_to_options This hook takes the value from different options in a DHCPREQUEST and inserts them into (other) options in a DHCPREPLY. Example usage is to take the value from an Option 82 string, and insert it in an Option 43 sub option, to direct the dhcp client to the correct config file in an auto provisioning scenario. ? Ola Thoresen https://github.com/Olen/kea_hooks pxe-replace4 This module utilizes the Kea hooks api to replace pxe options (siaddr, option 66, 67) via web request. BSD 3 clause Michael Gugino ​https://github.com/michaelgugino/kea-pxe-replace-mod runscript Hook that runs an external script at all hook points provided by Kea, passing information to the script as environment variables. MPL-2.0 Baptiste Jonglez ​https://github.com/zorun/kea-hook-runscript mac2ipv6 Generates and returns the same address as a device would have generated via SLAAC. A use case for this is to return a stateful-like address in a stateless network. MIT serverzone https://github.com/serverzone/Kea-dhcp-hooks userchk-ldap Provides a means to tag DHCP packets with a client class based on registration in an external LDAP database. 
MPL-2.0 CVUT https://github.com/cvut/kea-hook-userchk-ldap hook-opt82 The hook changes option 61 from option 82 (ip-port/mac-port) and generates flex-id from option 82. ? Alexander Dubnitskiy https://github.com/pilotsanya/kea-hook-opt82 dynamic-lifetime Kea dhcp hook for setting lifetime by IP address and MAC address MIT Shuta Eguchi https://github.com/egs33/kea-hook-dynamic-lifetime onelease-dhcp4 This is a simple hook to assign ONE lease for a client based on its HW (MAC) address. It will work properly only if clients have HW addresses generated in a particular way: the last four bytes represent an IPv4 address. Note: the hook is in src/hooks MPL-2.0 Petr Ospalý https://github.com/ospalax/addon-kea-hooks replace-serial Minimalistic Kea Hook to replace @SERIALNUMBER@ with option[43][4] Apache-2.0 Arnoud Vermeer https://github.com/funzoneq/kea-hook-replace-serial Other Useful-Looking Scripts Systemd Init Script In ISC\u0026rsquo;s gitlab you can find this contributed patch for systemd for use with Kea, developed by Rasmus Edgar Jørgensen. \u0026ldquo;Systemd handles the daemonization and actually seems to be a perfect fit for how Kea services behave atm.\u0026rdquo;\nIntegrate Kea with Netbox This blog post explains how to import available addresses from Netbox into Kea. Written by Michael Schwartzkopff.\nKea-telegraf-stats A script that re-formats the statistics output from the kea API into a telegraf-friendly format\nKea memfile-database Importer A script to make it easy to migrate from a csv-backed kea to a database-backed kea. From Munroe Sollog\nSplunk-enterprise-addon-for-Kea Developed by Mikael Bjerkeland.\nKea Prometheus Exporter Exports Kea metrics in the Prometheus Exposition Format. (Note that there is also an ISC-developed exporter in the Stork agent for Kea.)\nKea \u0026ldquo;Show Leases\u0026rdquo; Script\nEven More Miscellaneous DHCP Probe DHCP Probe attempts to discover DHCP and BootP servers on a directly attached Ethernet network. 
A network administrator can use this tool to locate unauthorized DHCP and BootP servers.\nAn excellent article posted on the Internet Society website, IPv6 Address Planning: Guidelines for IPv6 Address Allocation, by Tim Rooney\nSimple on-line subnet calculator\nIPv6 Address planner\ndhtest, a Linux DHCP traffic generator\nThese are a few of the Kea- and IPv6-related tools, websites, and books we have heard of. Please visit our DNS tools and DHCP tools pages if those are relevant to your needs.\nPlease note that it is your responsibility to check the licensing terms of any software you download. We have not tried many of these, so we can\u0026rsquo;t make any specific claims about suitability or quality.\nKea References Source Code The main source code repository for Kea is in ISC\u0026rsquo;s GitLab: https://gitlab.isc.org/isc-projects/kea.\nThe Kea source code tarballs are available on the ISC downloads page as well as on the ISC FTP server. ISC is also providing RPM and deb binary packages for Kea. They are available in our package repositories on Cloudsmith.\nIssues \u0026amp; Mailing List We highly recommend the kea-users mailing list. We encourage users to test Kea and report back their findings on the mailing list or on our GitLab page.\nDocumentation The Kea Administrator Reference Manual is posted on Readthedocs.io.\nRelease Model \u0026amp; License Kea has monthly development releases, and approximately annually we release a new stable branch. The stable version is then updated as needed when we have significant maintenance updates.\nKea is released under the Mozilla Public License, version 2.0: https://www.mozilla.org/en-US/MPL/2.0\nDownload \u0026amp; Installation The easiest way to install the software is to use native deb or RPM packages. They can be downloaded from:\nhttps://cloudsmith.io/~isc/repos/\nSupport We support development of Kea, Stork and our other software through the sale of annual support subscriptions.
Contact our sales team to discuss your BIND and Kea support requirements. Contact us for more information.\n","permalink":"/blogs/kea-user-tools/","tags":["Open Source","Kea"],"title":"Kea Tools Developed by Users"},{"categories":["ISC","Open Source","DNS","BIND"],"contents":"Dear BIND Users,\nAt the end of 2017 we announced a new BIND release model. We introduced a published development branch, and established a time-driven cycle of monthly maintenance releases on both development and stable branches. In the process we shortened the interval between new stable branches to 12 months. (Previously we released new stable branches at longer intervals.) We reaffirmed our commitment to extended support versions with 4 years of maintenance, and announced that 9.16 would be our next ESV. We released the 9.12 stable and 9.13 development branches in 2018, followed by 9.14 and 9.15 in 2019, and 9.16 and 9.17 in 2020.\nChanges for 2021 We plan to make a couple of significant changes to our established release model for 2021. We are not yet ready to create a new stable 9.18 branch. We are still making significant improvements to the current 9.16 branch, so a 9.18 branch would not yet provide any useful improvements over 9.16.\nWe will not create a new stable branch until 2022. Thereafter we plan to release a new stable branch every two years, rather than every 12 months. We will continue making significant changes in 9.16, which is an exception to our normal procedure: typically we refrain from making any feature changes in an Extended Support Version (ESV). While we haven\u0026rsquo;t formally declared that 9.16 is now an ESV, we have made it clear for quite a while that this is our plan. Since we are introducing new stable versions more slowly, we can now provide extended ESV support for every stable branch.
Since we are not creating a new stable branch, and we want to preserve our even/stable, odd/development numbering scheme, we also don\u0026rsquo;t plan to create a new development version in 2021; instead, we will continue development on BIND 9.17.x. The chart below shows the adjusted release plan going forward.\nWhy are we making this change? During 2019 and 2020 we embarked on some refactoring that was more ambitious than we had attempted previously, replacing BIND\u0026rsquo;s proprietary network interface with the popular libuv library. This was a project that, we eventually came to realize, required more than a year to complete, and in fact, we are still not done with it. As a result, in BIND 9.16, we currently have both the legacy and the new network interfaces, and depending on the operation and BIND\u0026rsquo;s role in the DNS (client or server), we use one or the other. For the majority of servers, this hybrid approach runs smoothly, but unfortunately for others it has introduced new points of contention or bottlenecks. We would therefore like to complete the replacement of all legacy networking code before moving into the next development cycle. We need another quarter or so to finish this work.\nIn addition, we had pledged to backport support for DNS over TLS (DoT) and DNS over HTTPS (DoH) to 9.16. We don\u0026rsquo;t want a long-lived branch to lack these new transports, but this also means non-trivial changes to 9.16.\nWhat does this mean for BIND users? The 9.16 version, which should be fully stable and in minimal-changes mode by now, is not yet as quiescent as an ESV would usually be at this point. We are continuing to support 9.11 through Q1 2022. More conservative users may wish to stay on 9.11 until mid 2021, before adopting 9.16.\nWe will honor our commitment to support 9.16 for four years from initial release. This does not change the support lifetime for 9.16, which is scheduled to end in Q1 2024. 
We will continue producing monthly maintenance and development releases, as we have been doing. Users will continue to have two open source branches suitable for deployment (9.11 and 9.16), the development branch for testing, plus two branches of the -S edition (again 9.11 and 9.16). We will continue to provide at least nine months of overlap between stable versions, allowing users ample time to migrate from one to the other. Finally, the best part of this plan is that, since every stable version will now eventually become an ESV, users who wish to stay on one branch for several years will be able to do so.\nWhat is the status of this change? We notified ISC support customers of this planned change in December, to give them a chance to comment. Since we have received no expressions of concern, we have updated the official ISC Software Support Policy (https://kb.isc.org/docs/aa-00896).\nWe think this change is the best way to support our users. If you have concerns or questions, please direct them to me (email below), or if you are on bind-users you may discuss them there.\nVicky Risk, Product Manager\nvicky@isc.org\nReferences ISC Software Support Policy (https://kb.isc.org/docs/aa-00896) 2017 blog on BIND Release Model (https://www.isc.org/blogs/bind-release-strategy-updated/) Policy for Removing named.conf Options (https://kb.isc.org/docs/policy-for-removing-namedconf-options) ","permalink":"/blogs/2021-bind-release-model/","tags":["Open Source"],"title":"BIND Release Model Update"},{"categories":["BIND","DNS","DHCP","Open Source","Company News","Annual Reports"],"contents":"2020 is finally over, and we can assess the damage. We at ISC are thankful that we have emerged remarkably unscathed by either the pandemic or the related economic disruption.
We will provide more details when we publish our 2020 Annual Report later in 2021, but for now please enjoy this brief slide show of the highlights and the more in-depth discussion below.\n2020 accomplishments When the pandemic broke out in the spring of 2020, we had no idea what the likely economic adjustment might mean for ISC. Because we are relatively small and don’t have a lot of options for increasing revenue if our support business drops off, we are vulnerable in a recession so we had to be conservative. Worried about the prospect of a worldwide economic slowdown, we cancelled all travel, instituted a hiring freeze, applied for a special small-business loan, and postponed staff raises.\nLuckily for ISC, Internet usage increased significantly during the pandemic and our support business has been strong. We sadly lost a couple of customers due to the impacts of the pandemic on their industries, but overall we have not seen any significant disruption in our support business. ISC gained a net of 12 new customers in 2020, with a significant increase in demand for Kea support.\nMoving out of our historic headquarters in Redwood City Our biggest single change in 2020 was closing our headquarters at 950 Charter Street in Redwood City, California. This was already going to be a big job, and it was further complicated by COVID-19. Moving meant shutting down ISC’s datacenter in the back of the old warehouse we had occupied for over 20 years.\nThe project required sunsetting the last remaining hosting and secondary name services we provided for impecunious countries and non-profits, and relocating ISC’s own equipment to professionally managed data centers at the Palo Alto Internet Exchange (PAIX) and Hurricane Electric in Fremont. With some sadness we realized we could no longer maintain a Stratum-0 GPS clock and so we decommissioned clock.isc.org, referring our users to nwtime.org. 
We returned the massive trailer with our backup power supply that had sat in our parking lot for years. Our wonderful facilities manager, Rory Doolin, who had been with ISC since the beginning, found new homes for old equipment, and he recycled and donated lightly-used furniture.\nWe had hoped to have an open house to mourn the passing of this gathering place and landmark, but that too was a casualty of the pandemic. Although giving up our historic building was emotional for many of us, we realized our resources could be better allocated elsewhere, and moving out has allowed us to direct funds to more effectively pursue our software development and deployment objectives.\nAll remaining paper documents were packed and shipped to the new headquarters and business office in Newmarket, NH, and our “offices” became 100% virtual, although that wasn’t a huge pivot for us since the majority of staff already worked from home.\nClosing our datacenter accelerated some of our projects to migrate to modern cloud-based applications for some of our non-core functions, including sales and finance. We migrated our Customer Relationship Management application from SugarCRM to Zendesk; for self-hosted applications, we continue to rely on open source solutions, including GitLab, Mattermost, Jenkins, etc.\nStaff Changes 2020 brought more staff changes than we have had in years. Seven ISC team members moved on, including one who retired after ten years with ISC and two whose jobs were tied to our building. We hired seven new full-time staff members: three developers for the BIND team, one for the DHCP team, a new Support Engineer, a Director of Finance and Accounting, and an Accounting Manager. We also brought on a part-time General Counsel. 
We tried to reach out to a diverse pool of applicants as part of our search process; we placed recruitment ads on Indeed, GlassDoor, LinkedIn, Flexjobs, Fossjobs.net, Twitter, Facebook, GitHub, Ada\u0026rsquo;s List, Women in Technology, and BlackJobs.com, and considered a total of 188 applicants.\nWebinars We gave 33 public webinars and conference talks, all of which are archived on our website. We offered a series of training talks on BIND and DNSSEC in the spring, and in the fall did a series on Using Kea. Between these two sets of programs, we held separate presentations on Kea performance with multithreading, troubleshooting with dig, DNS encryption, BIND and RPZ, DNS support in OpenStack, EDNS Client Subnet Identifier in BIND, and Comcast’s VinylDNS provisioning system. We had a record number of non-ISC guest presenters: Andreas Taudte, Joe Crowe and Paul Cleary, Stephan Lagerholm and Graham Hayes, Matt Stith, and Carsten Strotmann.\nF-Root The F-Root operations team made significant upgrades in 2020.\nThe F-Root management infrastructure was logically and physically separated from ISC’s corporate infrastructure, to improve security and allow for better cost accounting of the F-Root function. We added multiple new management tools for F-Root, including MRTG and smokeping. The team audited the configuration and stability of the systems and network connectivity at all the F-Root sites, which are located in IXes in many underserved areas around the world. We noted that IPv6 connectivity is improving. Peering is a crucial part of the F-Root Anycast network, but managing the more than 3300 active BGP peering sessions we have is also time-consuming. We are taking steps to automate our peering management. We migrated from Quagga to BIRD and centralized BGP route collectors for a global view of our transit and peering. We published many more RPKI \u0026ldquo;ROAs\u0026rdquo; to help secure our IP space against BGP hijacks. 
Ray Bellis produced a new Root System visualiser and announced it to the RSSAC Caucus list (https://atlas-vis.isc.org). ISC participated in the annual Day In the Life (DITL) data collection for OARC. In 2020 we added these new nodes to the F-Root system:\nChongqing, China, with local sponsor CAICT Qinghai, China, sponsored by CNNIC Hanoi and Ho Chi Minh, Vietnam, both sponsored by VNNIC Kaunas, Lithuania, sponsored by Kaunas University of Technology Riyadh, Saudi Arabia, sponsored by SAIX Moscow, Russia, sponsored by MSK-IX A second node in Chicago, US, sponsored by Chicago IX With the help and support of our local partners we were able to bring online refreshed hardware in Port-au-Prince, Haiti (ISC-funded); Podgorica, Montenegro (ISC-funded); Philipsburg, St. Maarten (ISC-funded); Hong Kong, China (sponsor HKIX); Osaka, Japan (sponsored by NTT); Kuala Lumpur, Malaysia (thanks to PPIM); Turin, Italy (ISC-funded); and Suva, Fiji (thanks to APNIC).\nAs of the end of 2020, F-Root consists of three \u0026ldquo;global\u0026rdquo; sites, 16 \u0026ldquo;classic\u0026rdquo; sites, and 42 \u0026ldquo;F-single\u0026rdquo; sites in service, in addition to over 200 F-Root instances hosted by Cloudflare. The global sites host 2x F-Root servers and F-Root management infrastructure. A classic site comprises 2x F-Root servers, a console server, routers, and switches. An F-single is our current configuration, which relies on a single 1U server to provide both the F-Root service and BGP Anycast. The classic sites are all expected to be phased out and upgraded to the F-single configuration by the end of 2022.\nBIND In 2019 we embarked on our most ambitious refactoring project ever, replacing the ancient native socket layer in BIND with a new open source component, libuv. This is a project we had put off for several years because of the complexity and risk of changing such a fundamental and, frankly, ancient part of BIND. 
Unlike the OSI-layer slideware we have all seen, the socket “layer” in BIND is not a neatly defined layer in the named daemon. Instead, it is - or was - fairly closely integrated into functions that manage tasks which have to be parked while waiting for responses expected over the network.\nThe 2019 work on the new network manager was not complete when we created the new 9.16 stable branch, and some users discovered problems. For a while the TCP performance on FreeBSD was much worse than on Linux, and performance on platforms without load-balanced sockets was abysmal. Additional refactoring and a rewrite of TCPDNS support improved both stability and performance, but we still haven’t completed the transition to the new network manager. Some operations still rely on the old BIND sockets.\nPartly as a result of this, we have decided to modify our BIND release model again, to lengthen the time between major branches and provide extended support for every stable branch. You can find out more in our blog post on the topic. This longer release cycle will enable us to continue to tackle complex refactoring projects. The next such project we have in mind is the rbtdb: BIND’s Red-Black tree, a critical data structure that is ripe for an overhaul.\nThe new network manager was a precondition for adding native support for the new encrypted transports DoH and DoT to BIND, so delays in completing the network manager caused us to miss our goal of shipping DoH support in 2020. However, we were pleased to release DoT in December; we see potential applications for DoT, particularly in the Enterprise. We plan to follow this up with further work on XoT, and of course finish and release DoH.\nOur technical support team encountered some issues related to BIND 9’s cache management and memory usage, reported by a few large and observant operators. We think we addressed some of these with improvements in our serve-stale implementation, but others may remain.
It can be complicated to discover the root causes of cache problems. In particular, it is difficult to capture and recreate the conditions that triggered the problem in the lab. One of our major initiatives for 2021 is building better test capabilities to realistically simulate a heavily loaded resolver.\nWe made a lot of improvements in BIND quality assurance in 2020, which enabled us to maintain a predictable monthly release cadence while adding more test tools and more platform coverage to our GitLab continuous integration system. Automating the testing of the RPMs we have been producing also helped us keep up with the monthly updates. We published the gitlab-runner scripts we are using for BIND as open source, and our QA Manager even found time to contribute a feature to git (https://git.kernel.org/pub/scm/git/git.git/commit/?id=296d4a94e7231a1d57356889f51bff57a1a3c5a1).\nSignificant BIND accomplishments in 2020 BIND\u0026rsquo;s Key and Signing Policy tool, which automated DNSSEC maintenance, was improved several times in 2020, leading to a very nice assessment by Daniel Stirnimann of SWITCH that “BIND 9.16 has improved DNSSEC support to the point where it can (finally) be called simple to use.” (https://securityblog.switch.ch/2020/12/01/dnssec-signing-your-domain-with-bind-9-16/) DNS over TLS (DoT) was released in 9.17.7 for early adopters. We joined the Google OSS fuzz project, adding to our in-house fuzzing. We also implemented new thread sanitizer checks and addressed many issues raised through this process. We reformatted all our documentation, migrating it from DocBook to reStructuredText. The BIND Administrator Reference Manual (ARM) is now generated using Sphinx and published on Read the Docs. We updated the BIND DNSSEC Guide, moved it from GitHub to GitLab, reformatted it to reStructuredText, and have added it as an appendix to the ARM on Read the Docs, making it easier for us to maintain and for our users to find. 
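For readers curious what the "simple to use" DNSSEC configuration referred to above looks like, here is a minimal named.conf sketch using BIND 9.16's built-in default policy. The zone name and file path are placeholders, not taken from the article:

```
// Hypothetical zone; named generates keys, signs the zone, and
// rolls keys automatically per the built-in "default" policy.
zone "example.com" {
    type master;
    file "example.com.db";
    dnssec-policy "default";
};
```

Operators can also define a custom dnssec-policy block to control algorithms and key lifetimes; see the BIND ARM for the full statement syntax.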
We added an official BIND 9 Docker image to the collection of ISC-maintained operating system packages. We issued nine BIND CVEs in 2020, from 2020-8616 through 2020-8624. Details can be found in the BIND 9 Security Vulnerability Matrix in our Knowledgebase. This is the same number of CVEs we had in 2019.\nCVE-2020-25705 SADDNS was announced by researchers; it\u0026rsquo;s a way of exploiting ICMP implementations to make it feasible to mount a DNS poisoning attack on vulnerable resolvers. The fix is in the operating system, but this CVE prompted us to “tighten up” the application of DNS cookies to help prevent more spoofing-type attacks in the future.\nWe welcomed two new developers to the BIND team in 2020, with another joining on January 1, 2021. One of these new team members implemented the improvements to serve-stale and another has already taken over work on DNS over HTTPS (DoH), so we are expecting a strong 2021.\nDHCP - Kea, ISC DHCP, and Stork The DHCP team had two major accomplishments in 2020: we released the first multithreaded version of Kea, and we developed a new graphical dashboard called Stork. We also added another software engineer to the team, specifically focused on assisting with customer-support issues and questions.\nMultithreading Transforming Kea into a multithreaded application took nearly a year of focused effort, as we had to modify both Kea itself and a number of hooks. We also put effort into various benchmarking tests, to ensure that the result was much faster than the single-threaded version. Our motivation for this project was to push Kea to a significantly higher throughput, something our service provider customers were asking for. We achieved dramatic performance gains, which we shared in a webinar and KB article. Our High Availability hook is now the limiting factor in Kea throughput in most scenarios.\nStork The second big accomplishment was the development of our first project with a graphical user interface, Stork. 
While we plan to make Stork a management dashboard for both BIND and Kea, Kea was the clear priority. A number of greenfield carriers, mostly providing local fiber access networks, are using Kea. They have no incumbent management tools, and they wanted something like the Anterius dashboard we did as a Google Summer of Code project two years ago. We set out to build a monitoring dashboard that could also eventually serve as a platform for configuration management. The result was Stork, which was built with resources from the DHCP development team using Go and Angular, both new technologies for ISC. We are not \u0026ldquo;reinventing the wheel\u0026rdquo; but rather integrating with powerful open source management utilities, such as Prometheus and Grafana. Stork is still considered experimental, but is already quite useful and is approaching a 1.0 version in mid-2021. Before we declare it to be production-ready, we would like to finish implementing automated testing for the UI, and add the ability to view and search lease files.\nStork provides a quick view of Kea server status, and is particularly useful for monitoring High Availability status and pool utilization. Stork consists of an agent running on the application server, which discovers which ISC daemons are running, and a web server which displays current status of the machines and applications being monitored. This enables us to provide an integrated view of system resource usage and application activity, to support troubleshooting.\nOther Kea accomplishments In addition to these two main activities, the DHCP team has added many new features and tweaks to accommodate the rapidly growing number of enterprise ISC DHCP users migrating to Kea. 
Several of these fall into the category of features we have made more flexible to accommodate additional operational requirements: we added support for bootp and leasequery; expanded our DDNS, client classification, and host reservations support; added support for multiple IP addresses per reservation; and made improvements to our High Availability hook. We are also continuing to track IETF work on DHCP, including draft-ietf-dhc-v6only, although standards work on DHCP has dropped off considerably vs. prior years. We maintained a regular schedule of monthly Kea maintenance releases while supporting a growing support customer base. As we did with BIND, we reformatted our Kea Administrative Reference Manual to reStructuredText format and posted it on Read the Docs. Finally, we sponsored a series of free technical webinars to train new Kea users, developed and delivered by Carsten Strotmann.\nTowards the end of 2020 we started work on making Kea more secure, adding basic HTTP authentication and access controls on the remote management interface. Kea was designed to be run on a protected internal management network, so previously we recommended running a local HTTPS proxy in front of Kea to secure access, but we are seeing more users deploying large numbers of Kea servers, where this becomes impractical. We are planning to continue working on Kea security and on more granular access controls in 2021, as our support customer base expands from mostly service providers to include an increasing proportion of enterprise users, many of whom have enterprise-wide application security standards.\nISC DHCP We released versions 4.4.2 and 4.1-ESV-R16 of ISC DHCP. The software is now in minimal-maintenance mode, but we are still receiving and responding to pull requests and issues from open source users for ISC DHCP, particularly the client and relay. We did add one new feature: support for the new IETF draft, https://tools.ietf.org/html/draft-ietf-dhc-v6only-08. 
We have also addressed a number of issues submitted by users, with patches that are published in our GitLab repo.\nOther ISC activities ISC as a whole made a significant investment in producing pre-compiled images for BIND and Kea in 2020. There seems to be a general trend towards ready-to-use software, and we received a number of requests to add packages. We added new Kea packages to support ARM architectures and Alpine Linux, and we expect Stork users to rely primarily on packages rather than building from source. We also added a package for the Kea Migration utility, which is based on ISC DHCP. The BIND team published an official Docker image for BIND, and we have requests for the same for Kea. While many of our traditional BIND support customers prefer the control of building their own software, we are seeing new customers - Kea users in particular - who are sold on containerization.\nAn ISC developer wrote and contributed the BSD kernel implementation of ILNP, the Identifier Locator Network Protocol. This is an IETF draft that has significant potential in an IPv6-only environment, such as a datacenter, to simplify the creation of overlay networks.\nIn 2020 we implemented a Code of Conduct for communications in ISC-sponsored fora, including our user mailing lists and GitLab discussions. We would like to welcome new users, and encourage users who might feel intimidated to participate actively in the open source and Internet communities. We have struggled to find the resources to support Outreachy interns, but we welcome any ideas and suggestions about how we could engage with new or discouraged open source users. We believe enabling wide engagement in and “control” over the Internet is an important part of our mission.\nWe missed getting to travel and see each other and the rest of the Internet community this year. 
To make up for that loss, we produced and shared two ISC Pandemic Cookbooks (Spring, and Holiday), which were fun projects to maintain connections among our team and social media followers.\nRecognition We would like to recognize our significant technical contributors from the user community, including:\nEmanuel Almeida of Cisco Systems, Inc., who reported the BIND vulnerability that became CVE-2020-8620. Joseph Gullo, who reported the BIND vulnerability that became CVE-2020-8621. Dave Feldman, Jeff Warren, and Joel Cunningham of Oracle, who reported BIND CVE-2020-8622. Lyu Chiy, who reported BIND CVE-2020-8623. Joop Boonen of credativ GmbH, for reporting BIND CVE-2020-8624. Tony Finch, who helped us fix a subtle bug in the KASP feature in BIND. Aaron Thompson, who upgraded our native PKCS#11 EdDSA implementation to PKCS#11 v3.0 and added EdDSA support for OpenSSL. Paul Frieden of Verizon Media, who exported the BIND zone timers to the statistics channel. Graham Clinch, who reported a problem with named-checkconf. Alberto Fernández, who found a problem with adding zones with double quotation marks. Tobias Klein, who found and reported that DNS rebinding protection was ineffective when BIND 9 was configured as a forwarding DNS server. Niclas Rozenvik, who reported an issue in Kea and proposed a fix. Frederic Bor, who submitted a patch for ISC DHCP virtual interface support. Alexander Pyhalov, who submitted a fix for KeaMA on Illumos. Tim McLaughlin, who pointed out some spots we missed when we added support for SHA 256 / 512 in ISC DHCP 4.4.3. We also offer our thanks to:\nAndreas Taudte, Joe Crowe and Paul Cleary, Stephan Lagerholm and Graham Hayes, and Matt Stith, all of whom took the time to prepare and present webinars for our open source users. 
Our stalwart ISC DHCP community experts, Simon Hobson, Sten Carlsen, Bill Shirley, Bob Harold, Niall O’Reilly, Glen Satchell, and Gregory Sloop, who are helping a whole new generation of users with their ISC DHCP issues via the dhcp-users mailing list. Numerous other BIND users, Kea users, and ISC DHCP users, who provided expert advice to others on our user mailing lists. ISC staff could not possibly answer all these questions ourselves, not only because of the number of questions, but because we don’t have the depth and variety of operational experience our users have. We are grateful for these contributions of technical expertise. ","permalink":"/blogs/2020-accomplishments/","tags":["2020","ISC"],"title":"ISC's 2020 Accomplishments"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"Let\u0026rsquo;s face it: 2020 has been a terrible year for a lot of people in a lot of ways. But we here at ISC are an optimistic bunch, and we know we can get through this by being kind to each other and finding ways to stay positive. In that spirit, we offer our customers, users, and friends this Global Pandemic Holiday Cookbook, full of international recipes that make our families happy and keep us warm even when we can\u0026rsquo;t be together.\nWe hope you like it, and we wish everyone a better year in 2021! Please feel free to send any comments or feedback to marketing at isc.org.\nAnd if you missed our first pandemic cookbook, we hope you\u0026rsquo;ll check that out too!\n","permalink":"/blogs/2020-holiday-cookbook/","tags":["holidays","2020","ISC"],"title":"ISC's 2020 Global Pandemic Holiday Cookbook!"},{"categories":null,"contents":"ISC is a not-for-profit organization. We do not have an endowment, and we are not funded by \u0026ldquo;the government.\u0026rdquo; We employ full-time, professional software engineers to develop and maintain the code that you can download for free. 
In the past, ISC accepted donations from companies to publish links to their websites. We no longer accept these types of donations.\nWe maintain our operations by selling annual software support contracts to organizations who want to ensure both the stability of their mission-critical Internet infrastructure and the survival of ISC\u0026rsquo;s open source software model.\nTHANK YOU for supporting open source. ","permalink":"/weblinks/","tags":null,"title":"Support Open Source"},{"categories":["ISC","Company News"],"contents":"We are happy to present another short profile of one of our engineers, to let our customers learn a little more about us. We hope you find them as interesting as we do!\nWe are pleased to welcome Andrei Pavel as our new Quality Assurance \u0026amp; Support Engineer! He joins both the Support and DHCP development groups as a specialized resource.\nAndrei began working for ISC full-time on October 1, 2020, and he has hit the ground running. He has already produced a couple of very useful resources for our customers: a KB article on Host Reservations with an operator \u0026ldquo;decision tree\u0026rdquo; and a file with examples of every possible option definition for Kea. He was happy to come work for ISC because \u0026ldquo;It\u0026rsquo;s a versatile position that fits my \u0026lsquo;jack of all trades, master of some\u0026rsquo; frame of mind.\u0026rdquo; He says he\u0026rsquo;s always been a \u0026ldquo;proud script kiddie,\u0026rdquo; which is useful for his QA work, and he enjoys looking at issues from a customer\u0026rsquo;s point of view. 
Andrei, who was an active contributor to Kea before joining ISC, \u0026ldquo;quenches his thirst for C++\u0026rdquo; by continuing to work on Kea features as well.\nHis educational background is in both natural and computer sciences: he holds a Bachelor\u0026rsquo;s in Computer Systems Architectures from the Politehnica University of Bucharest and is pursuing a master\u0026rsquo;s degree in Advanced Software Services. His previous jobs include customizing Kea for a large German telecommunications company and software consulting for a Romanian technology firm. He says he enjoys working on open source software because \u0026ldquo;I find satisfaction in watching software grow into the best version of itself, and I think keeping software development in a public forum is a step towards that. Also, making the code available to the public contributes to the idea of freedom, which I support.\u0026rdquo;\nAndrei likes the flexibility of ISC\u0026rsquo;s working hours and the fact that we use the most up-to-date tools, as well as our fair group decision-making process. He says the bureaucracy of code reviews, triage, and unit tests here sometimes frustrates him, but he appreciates the fact that it all leads to solid, robust code. And he\u0026rsquo;s also happy to be working with his former colleague Razvan Becheriu again, after Razvan joined ISC in 2019.\nIn addition to his full-time job, Andrei does a lot of side projects and enjoys working in such computer languages as C++, D, Dart, Flutter, Go, and Rust. He says that once he saw \u0026ldquo;the complex features that C++ offers, I became a language-oriented person and I wanted to know how each compares with other languages. I was even more amazed when I was introduced to Go, which maintains performance but makes concurrent programming a breeze. Rust changes the paradigm yet again by introducing affine types, which is like using unique_ptr for all pointers in C++. 
Finally, in my attempt to become a full-stack developer, I searched for a frontend framework. I stumbled upon Dart and Flutter, which promise \u0026lsquo;one-code, all-platforms\u0026rsquo;: mobile, web, and even the forsaken desktop. And they offer all the consistency you could dream of.\u0026rdquo; He\u0026rsquo;s even written his own Android app in Flutter as a tribute to a Romanian radio station.\nAndrei\u0026rsquo;s positive attitude and his \u0026ldquo;attention to seemingly non-significant details in software development, which turn out to be significant at scale,\u0026rdquo; are already making him an important addition to ISC! He and his fiancee live in Bucharest and hope to get married next year, once the coronavirus is under control and they can have a proper celebration.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We welcome your feedback at marketing@isc.org and look forward to bringing you more employee profiles in the future.\n","permalink":"/blogs/isc-profile-pavel/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Andrei Pavel!"},{"categories":["ISC","Open Source","DNS"],"contents":"The Day the Internet Broke The successful massive DDoS attack on Dyn during the fall of 2016 caused some popular sites - including Amazon, Netflix, and Twitter - to be unreachable for hours, because Dyn was unable to answer authoritative DNS queries. That attack shocked a lot of people on the Internet, because it was such a massive DDoS and it successfully brought down a well-managed DNS service. 
At the time, a lot of DNS administrators clamored for a “solution.” Some of BIND 9’s users hoped that a feature that allowed BIND to continue serving stale content could help them ride out another prolonged successful DDoS attempt against a significant authoritative provider.\nThe serve-stale feature was added in the BIND 9.11.4-S subscriber edition in 2018, and then included in the open source as of BIND 9.12.0. At the time, there was no Internet standard for how to serve stale answers, but there were a few early draft proposals, including https://tools.ietf.org/html/draft-wkumari-dnsop-ttl-stretching-00, followed by https://tools.ietf.org/html/draft-tale-dnsop-serve-stale-02.\nEventually, RFC 8767 was standardized in March 2020. This article explains the differences between the initial BIND 9 implementation and the updated implementation in BIND 9.17.7 and 9.16.9, following the standardization of RFC 8767.\nOriginal Serve-Stale Options In mid-2018, when ISC released our first implementation in BIND 9, there were two deployed implementations that were discussed at the IETF: one from NLnet Labs’ Unbound, and one from Akamai.\nThe Unbound Method Find an answer in cache. Then:\nRegardless of whether the answer has expired, serve it (if the answer has expired, serve it with TTL=0). If the answer rdataset has expired or is expiring, start a prefetch for it. If no answer is found, perform recursion. This method prioritizes fast answers from cache, as explained in this Unbound blog.\nThe Akamai Method Find an answer in cache. Then:\nIf the answer has not expired, serve it, starting a prefetch for it if it is expiring. If the answer has expired, start a fetch for it. If the fetch is taking too long, serve the stale answer while the fetch continues to run to timeout. If no answer is found, perform recursion. In essence, if a record in the cache has expired, the Unbound method serves the stale record (with a TTL of 0), while at the same time initiating a fetch for it. 
The Akamai method issues the fetch first, and only serves the stale record if no answer is received. Akamai had been using this implementation internally and found it helped them during the original attack on Dyn.\nISC discussed both options with some BIND users who were asking for this feature, and the preference at that time was for the Akamai method. Because Akamai had applied for patent protection on their method, and they had already implemented their algorithm in BIND, we asked them if they would donate their patch to ISC. After some internal review at Akamai, they generously contributed the implementation they had been using in-house, which was based on an earlier release of BIND, to ISC.\nInitial BIND Implementation There were three options provided with BIND 9’s initial implementation of serve-stale, with a fourth (stale-cache-enable) added more recently:\nstale-answer-enable : If yes, enable the returning of “stale” cached answers when the name servers for a zone are not answering. The default is not to return stale answers (answering from stale cache can also be enabled and disabled dynamically by the BIND server administrator via rndc serve-stale on|off). max-stale-ttl : If stale cache is enabled, max-stale-ttl sets the maximum time for which the server retains records past their normal expiry to return them as stale records, when the servers for those records are not reachable. The default is 12 hours (initially implemented as one week but reduced in the June 2020 BIND maintenance releases). stale-answer-ttl : This specifies the TTL to be returned on stale answers. The default was originally set at 30 seconds. As of the 9.16.6 and 9.11.22-S1 update, it defaults to one second. By the time this feature was available in the stable open source version of BIND, the industry had adapted to the threat of massive DDoS by diversifying. Since the 2016 Dyn outage, many authoritative providers have contracted for a strong secondary presence. 
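As a sketch, the three original options described above might be combined in named.conf along these lines (the values shown are the defaults quoted above, with max-stale-ttl expressed in seconds):

```
options {
    // Return stale cached answers when the authoritative
    // servers for a zone are not answering (default: no).
    stale-answer-enable yes;

    // Retain expired records for up to 12 hours (43200 seconds),
    // the revised default; the original default was one week.
    max-stale-ttl 43200;

    // TTL attached to stale answers; originally 30 seconds,
    // reduced to 1 second as of 9.16.6 and 9.11.22-S1.
    stale-answer-ttl 1;
};
```

As noted above, answering from stale cache can also be toggled at runtime with rndc serve-stale on|off.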
Perhaps because of that trend, the Internet has not seen another authoritative DNS outage on that scale since then. So the original goal of preserving answers in case of an hours-long outage is no longer as important as it once was.\nRecently, ISC has received some complaints that our serve-stale implementation is not efficient in production. Every client that asks for a record that is stale but still available in cache waits for a lengthy timeout, as BIND re-queries the authority before sending the stale answer. That was part of the original design, which was to serve as a last resort in case of a lengthy outage, but it provides a slower response.\nThe timeout BIND uses is based on an option called resolver-query-timeout. The default value of this timeout is 10 seconds, but it can be configured from 301 msec to 30 seconds. Although 10s is a very long time for most browser clients to wait for a response, we don\u0026rsquo;t recommend reducing resolver-query-timeout below 10 seconds in most operational environments, as this is known to cause a higher rate of SERVFAIL responses to clients due to lack of time to progress cache-loading of complex answers.\nAnother observation was that we had not provided operators with an option to disable the stale cache; in response to this we added the following parameter in 9.16.6 and 9.11.22-S1:\nstale-cache-enable: If yes (the default), enables the retention of expired cache records so that they are available to be returned from cache if either stale-answer-enable is set to yes, or is switched on later using rndc serve-stale on. At the same time, we made another tweak. As of 9.16.6 and 9.11.22-S1, answers that are received with TTL=0 are ineligible for serve-stale.\nRevised BIND Implementation Prioritizes Faster Responses As of the releases of BIND 9.17.7 and 9.16.9 in November 2020, we have revised our implementation more significantly to prioritize faster responses. 
BIND now replies with the stale answer in cache immediately if an attempt to refresh the RRset has previously failed, and continues to provide the stale answer for an amount of time specified by stale-refresh-time. After that stale-refresh-time has expired, the stale answer is regarded as \u0026ldquo;unusable\u0026rdquo; and is not served. Specifically:\nstale-refresh-time: The period of time, following a failed attempt to refresh an RRset, during which BIND serves the stale answer directly from cache. The default stale-refresh-time is 30 seconds, as RFC 8767 recommends. A value of zero disables the feature, meaning that normal resolution takes place first, and named returns \u0026ldquo;stale\u0026rdquo; cached answers only if that fails. (A value of zero results in the same behavior as the original BIND serve-stale feature.) This enhancement speeds up responses for nearly all of the users that are in need of a stale answer: the very first user that queries for a record that has just become unavailable from the authority will still have to wait for the query timeout, but all the subsequent users will get the stale answer from cache. There is another option we would like to add, the stale-answer-client-timeout below, which will help this first user. The \u0026ldquo;first user\u0026rdquo; may also be impacted by fetch limits (more on this topic below).\nCompliance with RFC 8767 These recent changes do not make the BIND 9 implementation completely compliant with RFC 8767; we have two relatively small changes left to implement:\nAdd stale-answer-client-timeout, which is the maximum amount of time a recursive resolver should allow between the receipt of a resolution request and the sending of its response (only to be used if stale-answer-enable is set). https://gitlab.isc.org/isc-projects/bind9/-/issues/2247 Update the defaults to the RFC 8767 recommended values.
https://gitlab.isc.org/isc-projects/bind9/-/issues/2248\nParameter | Current default | RFC 8767\nstale-answer-ttl | 1 second | 30 seconds\nmax-stale-ttl | 12 hours | 1-3 days\nstale-refresh-time | 30 seconds | 30 seconds or higher\nRate-Limiting and Serve-Stale Unfortunately, cache cleaning and cache maintenance are very complex topics. We have a more detailed Knowledgebase article on the BIND implementation of serve-stale and its interactions with other BIND features, but here are the most important points.\nHow does serve-stale interact with fetch-limits? Fetch-limits allow you to rate-limit the number of requests for a specific zone or to a specific server. Fetch-limits were implemented as a mitigation for the pseudo-random subdomain DDoS attack. The reason we are concerned about fetch-limits is that the primary use case for serve-stale is when the authoritative server is unavailable due to a successful DDoS.\nWhen a query is dropped due to fetch-limits, before sending SERVFAIL or DROP (depending on what\u0026rsquo;s configured in fetch-limits), we\u0026rsquo;ll look to see if there is stale data we could send instead. A query dropped due to fetch-limits won\u0026rsquo;t activate \u0026lsquo;stale-refresh-time\u0026rsquo;, as this is not considered a real failure in contacting the name servers in an attempt to refresh the given RRset.\nThe fetch-limits implementation does not block all requests. Some will succeed in bypassing the rate-limiting process. As soon as a refresh has been attempted (and failed), the stale-refresh-time window will be activated. The reduction in client queries due to serving of stale data should also help to increase the likelihood that subsequent refresh attempts will not be blocked by fetch-limits.\nServe-stale and prefetch Prefetch, implemented first in BIND 9.10, is a technique for refreshing the cached information for popular data, even without a pending query.
The theory is, if this is information that is frequently requested, BIND can anticipate that it will be needed again soon.\nIn the case of prefetch, the client request prior to the RRset expiry initiates an early refresh of the cache content. Client queries received during the period that stale-refresh-time is active, however, do not initiate an early attempt to refresh the stale RRset.\nThe following section was updated in March 2021 to reflect changes made in BIND 9.17.11, 9.16.13, and 9.16.13-S1.\nServe-stale and negative answers If there is a stale NXDOMAIN or NXRRSET in cache, BIND returns it only if the resolver query times out (stale negative data will not be returned on stale-answer-client-timeout). Although stale-answer-client-timeout is not used to provide an early response to clients from negative stale cache RRsets, once a refresh of these RRs has timed out, the client will receive the negative stale cached answer, and the stale-refresh-time will be started so that subsequent client queries will receive the negative stale response immediately.\nRRset Aging With serve-stale, BIND now has four stages in the aging of an RRset:\n\u0026ldquo;active\u0026rdquo; = within the published TTL. This is a fresh product, analogous to yogurt with a \u0026ldquo;sell by\u0026rdquo; date in the future. \u0026ldquo;expired\u0026rdquo; = past the published TTL. This is analogous to yogurt that is past its \u0026ldquo;sell by\u0026rdquo; date, but may still be edible. (\u0026ldquo;expired\u0026rdquo; includes both \u0026ldquo;stale\u0026rdquo; and \u0026ldquo;ancient\u0026rdquo; RRsets.) \u0026ldquo;stale\u0026rdquo; = \u0026ldquo;expired,\u0026rdquo; but not by more than max-stale-ttl seconds. This is analogous to yogurt that is past its \u0026ldquo;sell by\u0026rdquo; date, but may still be edible because it is not yet past its \u0026ldquo;use by\u0026rdquo; date.
\u0026ldquo;ancient\u0026rdquo; = \u0026ldquo;expired\u0026rdquo; by more than max-stale-ttl seconds, which also means that it\u0026rsquo;s ready to be removed from the cache as soon as the references, locks, and opportunity allow. This is analogous to spoiled yogurt that is inedible, but has not yet been thrown away. On servers that do not have stale-cache-enable yes;, all \u0026ldquo;expired\u0026rdquo; cache content is \u0026ldquo;ancient\u0026rdquo; (the yogurt\u0026rsquo;s \u0026ldquo;sell by\u0026rdquo; and \u0026ldquo;use by\u0026rdquo; dates are the same). Real-Network Scenarios Since we are discussing the design of serve-stale, Cathy Almond, our Support team lead, raised questions about interactions with rate-limiting under two scenarios.\nScenario 1: This resolver is innocent and is sending only queries for \u0026ldquo;good\u0026rdquo; names, but because the authoritative server/zone is under attack, it is not reliable. If there are RRsets available, does BIND respond from stale cache instead of SERVFAIL or DROP?\nIn this case, the revised implementation performs better than the prior implementation for almost all users. BIND sends any relevant answers from cache, but may not attempt to refresh the data if the server or zone is subject to fetch-limits. The fact that fetch-limits are active is a clear indication that there is a problem with getting answers for queries from these servers or from this zone.\nScenario 2: This resolver is participating in the attack via its compromised botnet clients. Note that in a Pseudo-Random Subdomain (PRSD) attack, the resolver receives queries for a series of apparently random, unique names. Because these are generated, non-existent names, the response should be NXDOMAIN. This is the scenario that fetch-limits was implemented for, and fetch-limits should apply here.\nIn this case, BIND sends SERVFAIL if a new query is attempted and fails. If fetch-limits are triggered, it will SERVFAIL or DROP the query, depending on the configuration.
There is unlikely to be an eligible stale answer in cache to serve instead (but if there were one, it would be used). This is exactly the behavior desired.\nReferences Wikipedia article on Dyn attack: https://en.wikipedia.org/wiki/2016_Dyn_cyberattack Original serve-stale draft: https://tools.ietf.org/html/draft-wkumari-dnsop-ttl-stretching-00 Second serve-stale draft: https://tools.ietf.org/html/draft-tale-dnsop-serve-stale-02. RFC 8767: https://tools.ietf.org/html/rfc8767 Blog on Unbound implementation: https://medium.com/nlnetlabs/some-country-for-old-men-7b9add7820c9 ISC Knowledgebase article with further BIND implementation details: https://kb.isc.org/v1/docs/serve-stale-implementation-details ","permalink":"/blogs/2020-serve-stale/","tags":["Open Source","Security"],"title":"Serve-Stale Update"},{"categories":["ISC","Open Source","DNS"],"contents":"In November 2020, some DNS researchers at Tsinghua University and the University of California, Riverside published a paper on a new way to poison the cache of a DNS resolver.\nThis Cloudflare blog describes the attack well. It is a relatively complex attack that exploits predictable rate limiting in the operating system to improve the odds of spoofing a connection. The most straightforward remedy is to update the operating system to make ICMP rate limiting unpredictable. The maintainers of the Linux kernel have already published a patch making the attack less feasible, and it is likely that other affected operating systems will do something similar.\nThis is really an operating system vulnerability, but the effect is to expose DNS resolvers to the possibility of cache poisoning by an attacker spoofing a valid connection. BIND 9 already implements a number of effective mechanisms to defend against spoofing attacks. These include source port randomization, query ID randomization (randomized per port), and connected UDP streams.
We implemented RRL (response rate limiting) years ago, which can limit the ability of an attacker to iterate over all the possible ports and query IDs.\nDNS cookies make spoofing attacks much more difficult. BIND will store cookies for any server it has previously communicated with. Servers that have provided BIND with cookies will be protected from spoofing attacks.* Connections from known servers with valid cookies can be exempted from rate limiting or other protective measures, making them more efficient. Once there is a critical mass of support for cookies across the DNS, we can easily implement a \u0026lsquo;stricter\u0026rsquo; cookie requirement in BIND that, for example, would require TCP if a valid cookie was not provided. DNS cookies are enabled by default in BIND 9, and according to our research, approximately 17% of TLDs (top-level domains) already support cookies. This is a significant minority of servers, but not enough yet to automatically distrust every communication without a valid cookie. Wider deployment of cookies would go a long way toward raising the cost of this type of attack.\nHowever, the best overall defense against cache poisoning is to implement DNSSEC. Authoritative operators can establish the authenticity of their zone data by signing it, and resolver operators who validate with a strict policy avoid distributing invalid data to users. DNSSEC works to prevent damage due to cache poisoning no matter how it is accomplished, whereas the ICMP rate-limiting randomization fix only prevents this specific vulnerability.\nRecommendation for BIND users In summary, we think this vulnerability is fairly difficult to exploit, but if you are operating a popular open resolver, you should certainly mitigate it.
Look for the kernel patch in your chosen operating system or apply the temporary workaround suggested by BlueCat.\nWe urge more implementers and operators to adopt DNS cookies - they are a lightweight defense and don\u0026rsquo;t add any additional round trips or communications overhead. In the longer term, this attack is another compelling reason to implement DNSSEC. Those operators who have implemented DNSSEC already can take a minute to congratulate themselves, and perhaps alert their management that they have cleverly avoided this vulnerability.\n*While we are proponents of DNS cookies for their lightweight efficiency, we acknowledge they are still vulnerable to reassembly attacks. This problem is addressed by an IETF draft, Defeating DNS/UDP Fragmentation Attacks. We welcome DNS and security research into possible vulnerabilities in our software or the global DNS system, and we encourage coordinated disclosure. To report a suspected security vulnerability responsibly, please consult this page or simply email us at security-officer@isc.org.\nReferences\nWeb site set up by the researchers to publicize their attack https://www.saddns.net/ USENIX Paper https://www.usenix.org/conference/usenixsecurity20/presentation/zheng Full technical paper https://www.cs.ucr.edu/~zhiyunq/pub/ccs20_dns_poisoning.pdf ACM CCS Distinguished Paper Award https://dl.acm.org/doi/10.1145/3372297.3417280 Cloudflare blog on the attack https://blog.cloudflare.com/sad-dns-explained/ UC Riverside article on the research https://news.ucr.edu/articles/2020/11/11/dns-cache-poisoning-ready-comeback ArsTechnica article on the attack https://arstechnica.com/information-technology/2020/11/researchers-find-way-to-revive-kaminskys-2008-dns-cache-poisoning-attack/ Tech Republic article on the BlueCat workaround https://www.techrepublic.com/article/how-to-temporarily-mitigate-sad-dns-for-linux-servers-and-desktops/ NLNET Labs blog on the issue
https://blog.nlnetlabs.nl/sad-dns-side-channel-attack-and-unbound/ Nov 15, 2020 CORRECTION:\nThe initial version of this blog stated that the researchers did not notify ISC before publication. It turns out that they did share a copy of their article with ISC prior to publication, in response to an inquiry about another paper.\n","permalink":"/blogs/2020-saddns/","tags":["Open Source","Security"],"title":"SAD DNS - Side channel attack"},{"categories":null,"contents":"Development of the DHCP Protocol The Dynamic Host Configuration Protocol (DHCP) was first defined as a standards-track protocol in RFC 1531 in October 1993, as an extension to the Bootstrap Protocol (BOOTP), a network protocol used by a network client to obtain an IP address from a configuration server. The motivation for extending BOOTP was that BOOTP required manual intervention to add configuration information for each client, and did not provide a mechanism for reclaiming used IP addresses. Many people worked to clarify the protocol as it gained popularity, and in 1997 RFC 2131 was released; it remains the standard for IPv4 networks. To support the IPv6 protocol, DHCPv6 was introduced and documented in RFC 3315.\nDHCPv6 is not just DHCPv4 for IPv6 addresses; it is a significantly different protocol. For example, the primary client identifier in DHCPv4 is the MAC address; in DHCPv6 the primary identifier is the DUID. In DHCPv4 there is an assumption that every device will have exactly one address; DHCPv6 devices can request multiple addresses. RFC 3633 added a DHCPv6 mechanism for prefix delegation, a powerful feature that has no equivalent in DHCPv4. DHCPv6 was further extended in RFC 3736 to provide configuration information to clients configured using stateless address auto-configuration.\nCurrently there are close to 140 DHCPv6 options defined, with many more undergoing the standardization process.
To see a list of approved options and message types, please visit IANA\u0026rsquo;s website.\nISC Activities in the IETF ISC has participated vigorously in the IETF specification process for DHCP over the years. A number of the major RFCs for the DHCP protocol were authored by current and former ISC staff, including several DHCPv6 failover drafts, both the DS-Lite and DHCP 4o6 transition drafts, and the massive RFC 8415, a DHCPv6 \u0026ldquo;bis\u0026rdquo; specification document that took eight authors four years to complete (https://www.isc.org/rfcs/). Tomek Mrugalski was co-chair of the DHC working group in the IETF for many years, before stepping down in 2020.\nISC\u0026rsquo;s DHCP Implementations ISC currently maintains two major DHCP software systems. The ISC DHCP software system was originally written for Internet Software Consortium (later renamed Internet Systems Consortium) by Ted Lemon and Vixie Enterprises as a reference implementation for the new DHC protocol. The ISC DHCP distribution includes a client, relay, and server. Kea is a completely new DHCP implementation from ISC that is intended to eventually replace the aging ISC DHCP. Both implementations adhere faithfully to the IETF standards for the protocol and both are open source.\nISC DHCP, aka dhcpd ISC DHCP 1.0 was released in June 1998 and was followed a year later by 2.0. Version 3.0 was posted in 2001 and not replaced with 4.0 until 2007. The Version 3.x branch added support for the IETF failover standard and asynchronous DDNS updates; version 4.0 added IPv6 support. All versions prior to 4 are now EOL. ISC is still supporting 4.1.x because it has a smaller footprint than the more recent versions, and is useful where a small codebase is important, but most users are advised to use the much newer 4.4 version. The project switched from the ISC license to the MPL2.0 license with version 4.4.1.
The Wikipedia article on ISC DHCP provides a good description of the software\u0026rsquo;s key features.\nSince 2004, a dedicated ISC engineering team has been maintaining and developing ISC DHCP. Former ISC employees Ted Lemon and Shawn Routhier made significant contributions to ISC DHCP\u0026rsquo;s development over many years. Thomas Markwalder has been the principal maintainer since 2016; Francis Dupont has been a significant contributor to ISC DHCP since 2007 and still participates in ISC DHCP\u0026rsquo;s maintenance. In addition to these core team members, community members have contributed an LDAP lease backend and a script for displaying leases. Several of the operating system packagers of ISC DHCP actively contributed issue reports and patches over the years. ISC DHCP has been widely embedded in other products, including home gateways, routers (relay implementation), and IPAMs. Many of the *nix operating system DHCP clients are believed to be based on the ISC DHCP client code, although they have diverged from the ISC open source distribution over the years. ISC hosts a community support mailing list, dhcp-users@lists.isc.org, where experienced users provide free advice and support.\nIn 2019, ISC released a Kea Migration utility, essentially a modified version of ISC DHCP written by Francis Dupont, that helps users migrate a configuration file from ISC DHCP to Kea by translating the common elements to Kea configuration syntax.\nThe Kea DHCP server The BIND 10 application framework was designed originally to support multiple applications, specifically DNS and DHCP. When ISC ended active work on the DNS portion of BIND 10 in 2014, the DHCP application development continued. The first release after the DNS portion of the project was discontinued was known internally as the \u0026ldquo;BIND10-ectomy\u0026rdquo; release, in which elements of the application framework that were deemed excessive for a single application were removed or downsized.
Since then the Kea DHCP server has been in active development at ISC. Kea was originally written by Tomek Mrugalski and Marcin Siodelski, but the core team has grown since then. The principal maintainers and contributors are listed in the project\u0026rsquo;s authors file here: https://gitlab.isc.org/isc-projects/kea/-/blob/master/AUTHORS.\nPerhaps the most significant difference between Kea and ISC DHCP was the commitment of the Kea developers to a modern REST management API. Kea was also designed to be modular, with separate DHCPv4, DHCPv6, and Dynamic DNS (DDNS) daemons, and separate optional libraries to extend the core DHCP server with extra features. ISC DHCP is comparatively monolithic. Unlike ISC DHCP, the Kea distribution includes only a DHCP server; there is no accompanying client or relay. ISC intends the Kea DHCP server to eventually replace ISC DHCP, which is no longer under active development.\nDS-Lite and AFTR In 2010 ISC published the \u0026ldquo;AFTR\u0026rdquo; open source software, a component of a DS-Lite architecture specified in RFC 6333. AFTR was a proof-of-concept implementation and is now unmaintained, but the source is still available at https://downloads.isc.org/isc/aftr/. You may find our blog post interesting.\nperfdhcp perfdhcp is a DHCP benchmarking tool distributed as part of the Kea DHCP sources. perfdhcp provides a way to measure the performance of DHCP servers by generating large amounts of traffic from multiple simulated clients. It can be used to test both IPv4 and IPv6 servers, and provides statistics on response times and the number of requests that are dropped.\nBy default, tests are run using the full four-packet exchange sequence (DORA for DHCPv4, SARR for DHCPv6). There are variations on this normal traffic scenario. Another scenario simulates an overwhelming flood of traffic, with a high proportion of retries, such as might occur when re-initializing a CMTS or recovering from a power outage.
When running a performance test, perfdhcp exchanges packets with the server under test as fast as possible, unless the -r parameter is used to limit the request rate. The length of the test can be limited by setting a threshold on any or all of the requests made by perfdhcp, the elapsed time, or the number of requests dropped by the server. By default, if there is no response received within one second, a response is considered lost and perfdhcp continues with other transactions.\nAnterius was an ISC Google Summer of Code project that demonstrated the opportunity to develop a lightweight management dashboard for Kea. This project, developed by Jerin John, was based on a graphical dashboard developed for ISC DHCP (GLASS, https://github.com/Akkadius/glass-isc-dhcp, by Chris Miles). Anterius is not maintained, but the original source code remains available on Github.\nStork Stork is a new (its first development release came in 2020) ISC management dashboard that provides the functionality of Anterius in a more solid, extensible web application framework. Stork is under active development at ISC in ISC\u0026rsquo;s GitLab repository: https://gitlab.isc.org/isc-projects/stork. Stork integrates with the popular Prometheus time-series database and the Grafana data visualization tool.\n","permalink":"/dhcphistory/","tags":null,"title":"The History of DHCP"},{"categories":["ISC","BIND","DNS","Webinars","Training"],"contents":"Comcast and VinylDNS VinylDNS is a vendor-agnostic open source front-end for enabling self-service DNS and streamlining DNS operations. It is designed to integrate with an existing DNS infrastructure, and provides extensibility to fit any installation. Joe Crowe, Senior DNS Engineer, and Paul Cleary, Senior Principal Software Developer from Comcast, will describe the project.\nVinylDNS manages millions of DNS records, supporting thousands of engineers in production at Comcast. 
The platform provides fine-grained access controls, auditing of changes, a self-service user interface, a secure RESTful API, and integration with infrastructure automation tools including Ansible and Terraform.\nThis talk will review VinylDNS and its history at Comcast, including several small demos that highlight how VinylDNS works.\nPlease join us for a FREE webinar on Wednesday, October 7 at 10 AM PDT/1 PM EDT/5 PM UTC. It will last approximately one hour, and will be recorded for later viewing; see our Presentations page for links to all our recorded webinars. And we encourage everyone to sign up for our training-info mailing list to receive notifications about all of ISC\u0026rsquo;s webinars.\nPlease click this link to register for the VinylDNS webinar: https://us02web.zoom.us/webinar/register/WN_xTx3UnnBSLaMwpY5MXWnag\nThank you for your interest!\n","permalink":"/blogs/comcast-vinyldns-webinar/","tags":["BIND","DNS","webinars","training"],"title":"Webinar - Comcast and VinylDNS"},{"categories":["BIND","DNS"],"contents":"This is a repost of a blog we originally posted in August 2019 about DNS Flag Day 2020. Our lead support engineer, Cathy Almond, recently gave a short presentation at the Virtual UKNOF conference about DNS Flag Day 2020; her remarks are here.\nThe DNS Flag Day is an initiative of DNS vendors (both open-source and proprietary) and DNS operators. Its aim is to make the Domain Name System (DNS) protocol more reliable, secure, and resilient while gradually removing workarounds for broken DNS behavior. Sometimes it takes a coordinated group effort to remove support for a broken behavior; if only one DNS server package implemented new rules on its own, users could simply use different software that still permitted the unsupported behavior.\nThe first-ever DNS Flag Day was held on February 1, 2019. It targeted removing a workaround to accommodate DNS authoritative servers that incorrectly handled the Extensions to DNS (EDNS) protocol. 
DNS software vendors, working together, pledged to release versions of their DNS server implementations with these workarounds removed. As a result of DNS Flag Day 2019, we\u0026rsquo;ve seen DNS vendors and operators all around the globe finally bring their DNS server implementations in line with the standards.\nFor DNS Flag Day 2020, the idea is the same: make the Internet a better place through a coordinated effort across participating DNS implementers, vendors, and operators. This time, however, the target might not seem directly related to DNS: IP fragmentation. The truth is that DNS is one of the few prominent users of IP fragmentation. When DNS messages are transferred between the DNS server and a DNS client over UDP, they can exceed the Maximum Transmission Unit (MTU) on any part of the path between the two endpoints. The MTU might vary between any two interconnects; while the standard MTU of Ethernet is 1500 bytes, the unit size is effectively reduced by encapsulation into different protocols (the most basic example would be a VPN). When the MTU is exceeded, the IP packet gets fragmented (split into multiple parts) and reassembled at the destination.\nThis IP fragmentation is considered fragile and harmful by many; there\u0026rsquo;s an IETF Draft that describes IP fragmentation and how it makes Internet communication less reliable. The situation got even more complicated with the introduction of IPv6, where the packet must be fragmented by the sender; there\u0026rsquo;s a specialized ICMP message for that, which (not so surprisingly) might get blocked by incorrectly configured firewalls. Our APNIC colleague, Geoff Huston, has measured IPv6 fragmentation and also considers it unfixable.\nEven if we could fix all the broken networking equipment, and all the broken configurations, IP fragmentation makes certain attacks on DNS possible. 
As the DNS Query ID and UDP port are carried in the first IP fragment, a clever attacker might spoof the second fragment and poison the DNS cache by swapping the subsequent good IP fragments with their own. If you are interested in the topic, I would recommend you read this presentation by Fujiwara-san: Measures against cache poisoning attacks using IP fragmentation in DNS.\nDNS Flag Day 2020 is an effort to address IP fragmentation in DNS by making small, albeit important, changes. First, the default maximum EDNS buffer size will be changed to a value that prevents IP fragmentation. The recommended value is slightly smaller than the minimum IPv6 MTU: around 1220-1232 bytes. The second change stems from the first one: when the DNS response won\u0026rsquo;t fit into a UDP packet, the default behavior of DNS is to fall back to TCP. That means that either you MUST make sure all your DNS responses fit into a ~1220-byte maximum packet size, or both the DNS client and the DNS server MUST be able to communicate via TCP.\nWhat does this mean in detail? Authoritative DNS Servers Authoritative DNS servers must be able to respond to DNS queries using the TCP protocol. Beware: even if the DNS server itself correctly supports the TCP protocol, which has been an integral part of DNS from day one, there might be a zealously configured firewall sitting in front of the DNS server blocking TCP communication over port 53. 
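The TCP requirement above can be illustrated with a short sketch (not ISC code; the query bytes are hand-assembled for demonstration). The only framing difference between DNS over UDP and DNS over TCP is a two-byte length prefix, per RFC 1035, section 4.2.2:

```python
import struct

def frame_for_tcp(dns_message: bytes) -> bytes:
    """Prefix a wire-format DNS message with the two-byte length field
    required when sending it over TCP (RFC 1035, section 4.2.2)."""
    if len(dns_message) > 0xFFFF:
        raise ValueError("DNS message too large for TCP framing")
    return struct.pack("!H", len(dns_message)) + dns_message

# A minimal hand-assembled query for "isc.org"/A: a 12-byte header
# (ID, flags with RD set, QDCOUNT=1), then the question section.
header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
question = b"\x03isc\x03org\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
query = header + question

tcp_payload = frame_for_tcp(query)
print(len(query), len(tcp_payload))  # the framed message is 2 bytes longer
```

When a UDP response comes back truncated (TC bit set), a compliant client re-sends the same query bytes over a TCP connection using exactly this framing.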
Next, the maximum accepted EDNS buffer size will be set to ~1220 bytes; the authoritative DNS server MUST honor the requested EDNS buffer size and never send a DNS response larger than the requested size.\nBIND 9 is already compliant with both the TCP and EDNS buffer size requirements, and you can already configure your BIND 9 server to never send DNS responses larger than ~1220 bytes, by adding max-udp-size 1220; to the options {}; section of named.conf:\noptions { max-udp-size 1220; }; Recursive DNS Servers Recursive DNS servers must honor all the same requirements as authoritative DNS servers, with the extra requirement that they must never advertise an EDNS buffer size larger than ~1220 bytes. They must also be ready to fall back to re-querying over TCP if a truncated DNS response is received.\nAgain, BIND 9 is already compliant with the hard requirements; you can test changing the maximum advertised EDNS buffer size by setting edns-udp-size 1220; in the options {}; section of named.conf:\noptions { edns-udp-size 1220; }; Other Changes in BIND 9 Currently, BIND 9 tries very hard to guess the maximum allowed EDNS buffer size that will be accepted by the DNS server on the other side. We will not, for the time being, remove the code that makes this possible, and we will not limit the maximum EDNS buffer size that a BIND 9 user can configure. We may add a warning when the user configures an EDNS buffer size beyond the limit proposed by DNS Flag Day 2020.\nThe only end-user-visible change will be the change of the default configuration for the edns-udp-size and max-udp-size configuration options.\n","permalink":"/blogs/dns-flag-day-2020-2/","tags":["DNS Flag Day","DNS Flag Day 2020","BIND"],"title":"DNS Flag Day 2020"},{"categories":["Stork","New Releases","Kea"],"contents":"Stork 0.11.0, September 2nd, 2020 ISC is delighted to announce the release of Stork 0.11. This is a development release of the Stork dashboard for Kea. 
Note that we recently released a new version of Kea, Kea 1.6.3, which adds the API call required for monitoring with Stork, so you can now use Stork with Kea 1.6. Stork also works with Kea 1.7.x versions.\nThis is a somewhat smaller release than usual. Some team members enjoyed a well-earned vacation this month, and those who were in the office focused on pushing a major release of Kea 1.8.0 out the door. With the summer holiday period coming to a close and the full team back in the office, we expect the next Stork release to be a bit bigger.\nThe notable new features, bug fixes, and changes introduced in this version are:\nSecurity improvements. For the last couple of months, Stork has been adding many new features in each release. This time, we are focusing on security improvements. Many dependencies have been updated to their latest versions without any known vulnerabilities (#362), the credentials are now passed in a more secure way that is not susceptible to data leakage in proxy logs (#375), and the sanity checks for the log viewer capability are now vastly improved (#348).\nUI improvements. The status of BIND 9 daemons is now displayed correctly (#379), and the tables are displayed better on low-resolution (720p) devices (#295). Also, the dashboard no longer shows daemons that are not monitored (#365).\nStork 0.11 Services Display Documentation. The section explaining how to use the Stork agent as a Prometheus exporter for BIND 9 was expanded (#382).\nBug fixes. Stork no longer gets confused when the database access password uses uppercase letters (#361), and the daemon status is now presented correctly after recovering from a connection breakdown (#384).\nStork Environment Simulator. Running the Stork demo is a popular way to get your first experience with Stork. There used to be two separate pages, the DHCP Traffic Generator and the DNS Traffic Generator; those two tools were merged and evolved into the Stork Environment Simulator. 
You can add all of the predefined demo machines to your demo Stork server with a single click. This tool was extended in this release to add the ability to simulate Kea failures by stopping and starting services. This is useful for experimenting with the monitoring of Kea High Availability pairs. (#380).\nStork 0.11 Environment Simulator can simulate Kea server failures Known issues License This version of Stork is released under the Mozilla Public License, version 2.0.\nDownload The easiest way to install the software is to use native deb or RPM packages. They can be downloaded from our open source package repository on Cloudsmith.\nThe Stork source and PGP signature for this release may be downloaded from: https://downloads.isc.org/isc/stork The signature was generated with the ISC code signing key.\nISC provides documentation in the Stork Administrator Reference Manual. It is available on ReadTheDocs.io, and in source form in the docs/ directory. We ask users of this software to please let us know how it worked for you and what operating system you tested on. Feel free to share your feedback on the stork-users mailing list. We would also like to hear whether the documentation is adequate and accurate. Please open tickets in the Stork GitLab project for bugs, documentation omissions and errors, and enhancement requests. We want to hear from you even if everything worked.\nSupport Professional support for Stork will become available once it reaches the 1.0 milestone. Existing ISC customers that consider themselves very early adopters may get involved in the development process, including roadmap, features planning, and early testing, but the software maturity level does not constitute a typical professional service before the 1.0 milestone. Free best-effort support is provided by our user community via the user mailing list. Information on all ISC public email lists is available. 
If you have any comments or questions about working with Stork, please share them on the stork-users list. Bugs and feature requests may be submitted via GitLab.\nChanges The following summarizes changes and important upgrades since the Stork 0.10.0 release. This text references issue numbers. For more details, visit the Stork GitLab page.\n* 101 [func] godfryd Merged Stork DHCP Traffic Simulator and Stork DNS Traffic Simulator into one web application called Stork Environment Simulator. Added capabilities for adding all machines present in the demo setup, and the ability to stop and start Stork Agents, Kea, and BIND 9 daemons. This allows simulation of communication issues between applications, Stork Agents, and Stork Server. (Gitlab #380) * 101 [func] marcin Restrict log viewer\u0026#39;s access to the remote files. The log viewer can only access log files belonging to the monitored application. (Gitlab #348) * 100 [func] godfryd Improved user experience of updating machine address/port. Improved visual aspects. Added refreshing state from the machine after changing the address. (Gitlab #283) * 99 [func] godfryd The DHCP dashboard now presents only monitored daemons. Daemons that have monitoring switched off are not visible in the dashboard. (Gitlab #365) * 98 [bug] marcin Corrected an issue causing false errors about broken communication with the monitored Kea application after the application was brought back online. (Gitlab #384) * 97 [bug] godfryd Improved layout of various tables so that they are displayed correctly on smaller screens. Fixed the address of the machine displayed in the tables (previously it always showed 127.0.0.1). (Gitlab #295) * 96 [doc] matthijs Add documentation on monitoring the BIND 9 application. (Gitlab #382) * 95 [func] godfryd Fixed presenting an application status on a machine tab with a BIND 9 application. Previously it was always red/inactive. 
Now it is presented the same way as for the Kea app: a status for each daemon of the app. (Gitlab #379) * 94 [bug] marcin Fixed an issue whereby the user was unable to log in to Stork when the database password contained uppercase letters. In addition, passwords with spaces and quotes are now also supported. (Gitlab #361) * 93 [func] marcin Login credentials are passed in the message body rather than as query parameters. In addition, the user information is obfuscated when db tracing is enabled. (Gitlab #375) Thank you again to everyone who assisted us in making this release possible.\n","permalink":"/blogs/stork-0-11-released/","tags":["Stork","New Releases","Kea"],"title":"Stork 0.11 Released"},{"categories":["ISC","BIND","DNS","Webinars","Training"],"contents":"Encrypted DNS - where are we now? As part of ISC\u0026rsquo;s ongoing mission to keep the Internet open and accessible to all, we are happy to present another webinar on the status of encrypted DNS.\nThe Domain Name System (DNS) is one of the basic protocols of the Internet - almost no application can work without DNS name resolution. Unfortunately, the classic DNS protocol was never designed to protect end-user privacy.\nAfter a few years of relative calm, the IETF has intensified its work on privacy in the DNS, and new DNS protocols have been standardized: DoT (DNS over TLS) and DoH (DNS over HTTPS).\nWith the initial euphoria about the prospect of encrypted DNS over, disillusionment has been spreading in the IT community - the first implementations of DoH in major web browsers have caused concerns that private data will be collected centrally at US cloud providers, a step backwards for user privacy.\nBut the story is not over yet. New protagonists Apple and Microsoft have joined the fray with new and interesting ideas about DNS encryption.\nPlease join us for a FREE webinar on Wednesday, September 23 at 10 AM PDT/1 PM EDT/5 PM UTC. 
It will last approximately one hour, and will be recorded for later viewing.\nIn this talk we will:\nreview the reasons for concern about DNS privacy explain the new protocols DoT and DoH present the current (as of September 2020) status of DoH/DoT deployment look at new work from the IETF Adaptive DNS Discovery (ADD) working group: Oblivious DoH (oDoH) and Adaptive DNS resolver discovery provide an update on the latest advances from Apple and Microsoft Please click this link to register: https://us02web.zoom.us/webinar/register/WN_ef5x6HtlQsqGs1T_7QZ6MA.\nAnd please subscribe to our training-info mailing list to be notified of other upcoming ISC webinars! Thank you for your interest.\n","permalink":"/blogs/encrypted-dns-september-2020/","tags":["BIND","DNS","webinars","training"],"title":"Webinar - Update on Encrypted DNS (Sept 2020)"},{"categories":["ISC","Kea","Webinars","Training"],"contents":"ISC is delighted to announce its next series of webinars: Using the Kea DHCP Server. This series of six bi-weekly webinars will enable a Linux or UNIX system administrator to successfully deploy the open source Kea DHCP server. We will begin with a review of DHCP protocol basics. Attendees will learn how to install, configure, and monitor a Kea DHCP server to assign dynamic and static IP addresses and IPv6 prefixes, and configure appropriate client options such as the default gateway and DNS server.\nRegister once for the entire series: all sessions will be recorded and the recordings posted within a few days, so you can easily review any sessions you miss. Each session will last approximately one hour, including Q\u0026amp;A as time permits.\nSchedule September 16: Session 1. DHCP fundamentals, DHCPv4 vs DHCPv6 differences September 30: Session 2. Kea installation and configuration October 14: Session 3. Kea lease allocation, client classification, and option assignment October 28: Session 4. 
High Availability and DB Backends Please note there is an extra week between sessions 4 and 5. There is also a Daylight Saving Time disconnect between the US and Europe; please check your registration from Zoom for the correct time in your timezone.\nNovember 18: Session 5. Monitoring, Logging, and Stork December 2: Session 6. Migrating to Kea from ISC DHCP Please click this link to register: https://us02web.zoom.us/webinar/register/WN_WvQmqh-wQ1CnKr_CDYhQ8w. And please subscribe to our training-info mailing list to be notified of other upcoming ISC webinars!\nWe look forward to seeing you online.\n","permalink":"/blogs/kea-webinar-series-2020/","tags":["Kea","webinars","training"],"title":"Webinar Series - Using the Kea DHCP Server"},{"categories":["New Releases","Kea"],"contents":"ISC is delighted to announce the release of Kea 1.8.0.\nKea 1.8.0 - New Stable Version Kea 1.8.0 is our newest stable version. The following major features and bugfixes have been implemented since the 1.6 series:\nMulti-threading. Kea is now able to take advantage of multiple CPU cores and process many packets in parallel. In many cases, this improves overall performance. We have observed a substantial increase in LPS (Leases Per Second) with multi-threading vs. the single-threaded mode in configurations without High Availability. Many factors contribute to overall performance, including enable-multi-threading, thread-pool-size, packet-queue-size, the kind of backend used, and whether the database is local or remote; some of these can limit the performance improvement with multi-threading. There are new sections in the Kea ARM dedicated to enabling and tuning multi-threaded performance.\nStork support. Stork, our new open source management application, monitors many aspects of Kea and presents its status in an easy-to-use graphical interface. Stork can also export statistics to Prometheus and Grafana. Stork relies on the status-get API command, available in this version of Kea. 
You can learn more about Stork here.\nHA improvements. Many improvements have been made in Kea\u0026rsquo;s High Availability feature. A new mode of operation, passive-backup, has been implemented. In this mode there is one primary server and an arbitrary number of backup servers that get updates from the primary. It differs from other HA modes in that there is no dedicated secondary server. The updates to backup servers are now asynchronous, which greatly reduces the performance penalty for adding new backup servers. Also, a set of new commands to manipulate HA status (ha-maintenance-start, ha-maintenance-notify, and ha-maintenance-cancel) is now available. They are useful for performing a controlled shutdown with faster recovery. Finally, the HTTP connection recovery has been improved; server pairs are now able to reestablish connections more reliably.\nFine-grained DNS Updates. Earlier Kea versions supported global DNS update parameters. Kea 1.8 makes it possible to specify many DNS update parameters at the global, shared-network, and subnet levels. In particular, it is possible to specify whether to send the updates or not (ddns-send-updates), control whether to sanity-check the client update (ddns-replace-client-name, hostname-char-set, hostname-char-replacement), and replace (ddns-generated-prefix) or expand the update (ddns-qualifying-suffix) on a per-shared-network or per-subnet basis. This provides more flexibility over DNS updating.\nNew API commands. Many new API commands have been implemented. There are new commands to retrieve leases matching certain patterns (lease4-get-by-client-id, lease4-get-by-hostname, lease4-get-by-hw-address, lease6-get-by-duid, lease6-get-by-hostname). Other new commands instruct Kea to retrieve new configuration from the Config Backend immediately rather than waiting for a timer to expire (config-backend-pull), or to trigger DNS updates (lease4-resend-ddns, lease6-resend-ddns).\nNew open source hook. flex_option. 
This new hook enables you to add, supersede, or remove an option based upon the result of an expression. The option\u0026rsquo;s raw value can be determined by an expression.\nNew subscriber-only hook. Leasequery. This library provides support for DHCPv4 Leasequery as described in RFC 4388, and for DHCPv6 Lease Query (RFC 5007). This feature is primarily used by hardware relays (Cisco in particular) to update their state after a reboot, but it can be used by any device or software that is capable of using the Leasequery protocol.\nNew open source hook. BOOTP. This legacy feature was added using a hook module so that it can easily be omitted from more modern deployments. Surprisingly enough, there are still many legacy devices that require BOOTP.\nBugfixes. There are 41 tickets marked as bugs that were fixed since the 1.6.0 release. Even more small improvements have been implemented. For a complete list, please see the Changelog below. The two most notable bug fixes provide more robust database connection recovery and improve the handling of broken configurations.\nDocumentation and test reports. There were 19 documentation updates that were deemed big enough to be listed in the Changelog. In addition to that, we now publish our test results (see our https://reports.kea.isc.org page). This is especially important in the context of tuning multi-threaded Kea deployments.\nKea Migration Assistant (keama) packages. Keama is a tool that reads ISC DHCP\u0026rsquo;s configuration file and produces a Kea configuration file based on it. ISC DHCP and Kea are very different, so a complete conversion is impossible, but this provides a starting place for people who are migrating. It is now much easier to try it, as Keama packages are now available at https://cloudsmith.io/~isc/repos/keama/packages/. 
You can learn more about Keama from our KB article.\nPackages: We have produced and published on Cloudsmith packages for Kea 1.8 for CentOS 7 \u0026amp; 8, Fedora 31 \u0026amp; 32, and also Alpine 3.10 and 3.11. We have also updated our restricted-access repository for subscribers with new Kea 1.8.0 packages, including the Premium and Subscriber-only hooks.\nKea References Source Code The main source code repository for Kea is in ISC\u0026rsquo;s GitLab: https://gitlab.isc.org/isc-projects/kea.\nThe Kea source code tarballs are available on the ISC downloads page as well as on the ISC FTP server. ISC is also providing RPM and deb binary packages for Kea. They are available in our package repositories on Cloudsmith.\nIssues \u0026amp; Mailing List We highly recommend the kea-users mailing list. We encourage users to test Kea and report back their findings on the mailing list or on our GitLab page.\nDocumentation The Kea Administrator Reference Manual is posted on Readthedocs.io.\nRelease Model \u0026amp; License Kea has monthly development releases and, approximately annually, we release a new stable branch. The stable version is then updated as needed, when we have significant maintenance updates.\nKea is released under the Mozilla Public License, version 2.0: https://www.mozilla.org/en-US/MPL/2.0\nDownload \u0026amp; Installation The easiest way to install the software is to use native deb or RPM packages. They can be downloaded from:\nhttps://cloudsmith.io/~isc/repos/\nSupport We support development of Kea, Stork, and our other software through the sale of annual support subscriptions. Contact our sales team to discuss your BIND and Kea support requirements. Contact us for more information.\n","permalink":"/blogs/kea-1.8.0-released/","tags":["DHCP","New Releases","Kea"],"title":"Kea 1.8.0 Released"},{"categories":["Stork","New Releases","Kea"],"contents":"ISC is delighted to announce the release of Stork 0.10. 
This is a development release of the Stork dashboard for Kea. Note that we recently released a new version of Kea, Kea 1.6.3, which adds the API call required for monitoring with Stork, so you can now use Stork with Kea 1.6. Stork also works with Kea 1.7.x versions.\nThe notable new features, bug fixes, and changes introduced in this version are:\n1. BIND 9 Exporter enhancements. The Stork Agent can now export additional statistics from a BIND 9 resolver to the Prometheus time-series datastore: queries by duration, transport protocol used (TCP/UDP), and packet sizes. Note that use of the Stork Agent for this purpose does not require the Stork server, so it can be a standalone exporter for Prometheus. #63\n2. Log viewer for Kea. Stork is now able to retrieve a list of log files used by Kea and display the most recent entries in them. #342, #344, #359, #349, #364\nSimple log viewer 3. Responses Per Second for Kea. Stork now shows Responses Per Second averaged over the short term (last 15 minutes) and the long term (last 24 hours). This statistic is roughly equivalent to LPS (Leases Per Second), but it has the additional benefit of recording renewal activity, and it also shows responses if the server ran out of available leases. #252, #363\nResponses Per Second statistic 4. Stop Monitoring for Kea. Kea\u0026rsquo;s Control Agent comes preconfigured with sockets for DHCPv4, DHCPv6, and DDNS. However, most deployments don\u0026rsquo;t deploy all three daemons. Stork can\u0026rsquo;t determine whether missing daemons died or were never deployed. Previous versions of Stork flagged any missing daemons as a fault. There is now a button that toggles monitoring start/stop, which is useful to squelch monitoring for daemons that are not deployed. #324, #339\nNew control to stop monitoring a server 5. Events for communication failures. 
The events subsystem has been expanded to record events in case of communication failure and reestablishment. This covers communication breakdowns between the CA and Kea daemons, between the Stork Agent and the CA, and between the Stork Server and Stork Agent. #324. Note that in this release, the events are displayed on the web dashboard only. For our next release, we are investigating adding alerting via an open source alerting system, possibly leveraging existing features of Grafana or Prometheus to send alerts to email, PagerDuty, or other channels.\n6. UI improvements. The user interface evolution continues. The DHCP dashboard has a dedicated link in the DHCP menu (#280), although you can still navigate to the dashboard by clicking on the Stork logo. The machines page has additional tooltips, and the style of some tables has been updated to improve readability. #112, #293\n7. Bugfixes. A crash when empty requests were sent to REST API endpoints for users and machines was fixed. #310, #311, #312. A problem with RPM upgrades was addressed. Upgrades from 0.10.0 onward should work. However, upgrades from older broken packages can\u0026rsquo;t be fully fixed, as the issue is in their scripts. When upgrading from older (0.9.0 and earlier) packages, it is recommended to uninstall the old package first. #334. Stork should now work on RHEL and other systems running in FIPS mode. We now use the more secure blowfish algorithm instead of the weaker MD5 hash for storing Stork passwords. #356. Please see this link for known issues: https://gitlab.isc.org/isc-projects/stork/-/wikis/Known-issues\nRelease Model Stork has monthly development releases on the first Wednesday of each month (with some exceptions around holidays), although we sometimes delay them a bit for last-minute changes. 
We encourage users to test the development releases and report back their findings on the stork-users mailing list, available at https://lists.isc.org/mailman/listinfo/stork-users. This text references issue numbers. For more details, visit the Stork GitLab page at https://gitlab.isc.org/isc-projects/stork/issues.\nLicense This version of Stork is released under the Mozilla Public License, version 2.0. https://www.mozilla.org/en-US/MPL/2.0\nDownload The easiest way to install the software is to use native deb or RPM packages. They can be downloaded from: https://cloudsmith.io/~isc/repos/stork/ The Stork source and PGP signature for this release may be downloaded from: https://downloads.isc.org/isc/stork The signature was generated with the ISC code signing key which is available at: https://www.isc.org/pgpkey ISC provides documentation in the Stork Administrator Reference Manual. It is available on ReadTheDocs.io at https://readthedocs.org/projects/stork/, and in source form in the docs/ directory. We ask users of this software to please let us know how it worked for you and what operating system you tested on. Feel free to share your feedback on the stork-users mailing list (https://lists.isc.org/mailman/listinfo/stork-users). We would also like to hear whether the documentation is adequate and accurate. Please open tickets in the Stork GitLab project for bugs, documentation omissions and errors, and enhancement requests. We want to hear from you even if everything worked.\nSupport Professional support for Stork will become available once it reaches the 1.0 milestone. Existing ISC customers that consider themselves very early adopters may get involved in the development process, including roadmap, features planning, and early testing, but the software maturity level does not constitute a typical professional service before the 1.0 milestone. Free best-effort support is provided by our user community via a mailing list. 
Information on all public email lists is available at https://www.isc.org/mailinglists/. If you have any comments or questions about working with Stork, please share them to the stork-users list (https://lists.isc.org/mailman/listinfo/stork-users). Bugs and feature requests may be submitted via GitLab at https://gitlab.isc.org/isc-projects/stork/issues.\nChanges The following summarizes changes and important upgrades since the Stork 0.9.0 release.\n92 [func] godfryd\nImproved presenting application status on machines page. Now, instead of summary app status, there are presented statuses for each daemon of given application. (Gitlab #297, #282)\n91 [doc] tomek\nUpdate man pages and installation instructions. (Gitlab #202, #266, #307)\n90 [ui] tomek\nClarified machines page, added tooltips. Updated color scheme to improve readability of wide tables. (Gitlab #112, #293)\n90 [bug] marcin\nFixed an issue with refreshing log displayed within the log viewer. The issue was triggered by the periodic updates of the information about monitored apps. As a result of the updates the log file identifiers were changing which resulted in an error message informing that the viewed file no longer exists. (Gitlab #364)\n89 [func] godfryd\nChanged md5 to blowfish as algorithm in hash function used to store password in PostgreSQL database. (Gitlab #356)\n88 [bug] godfryd\nFixed upgrading RPM agent and server packages. There was a problem of re-adding stork-agent and stork-server users that already exist in case of upgrade. (Gitlab #334)\n87 [doc] marcin\nDescribed Kea log viewer in the ARM. (Gitlab #349)\n86 [func] tmark\nAdded tool tip to RPS columns on DHCP dashboard. (Gitlab #363)\n85 [bug] marcin\nFixed regression in the log viewer functionality which removed links to the log files on the Kea app pages. In addition, improved error message presentation on the log viewer pages. 
(Gitlab #359)\n84 [func] godfryd\nAdded stop/start monitoring button to better control which services are monitored. Communication failures now generate events that are recorded in the events system. Machine view now shows events. (Gitlab #324, #339)\n83 [func] tmark\nAdded RPS (Responses Per Second) statistics to the DHCP Dashboard. (Gitlab #252)\n82 [func] marcin\nViewing the tail of the remote log files is enabled in the UI. (Gitlab #344)\n81 [func] matthijs\nAdd more query details to the BIND 9 exporter and Grafana dashboard: queries by duration, which transport protocol is used, packet sizes. (Gitlab #63)\n80 [func] marcin\nThe list of loggers used by the Kea server is fetched and displayed in the Kea application tab. (Gitlab #342)\n79 [ui] vicky, tomek, marcin\nAdded explicit link to the DHCP dashboard. (Gitlab #280)\n78 [bug] godfryd\nFixed crashes when empty requests were sent to REST API endpoints for users and machines. (Gitlab #310, #311, #312)\nThank you again to everyone who assisted us in making this release possible.\n","permalink":"/blogs/stork-0-10-released/","tags":["Stork","New Releases","Kea"],"title":"Stork 0.10 Released"},{"categories":["New Releases","Kea"],"contents":"ISC is delighted to announce the release of Kea 1.6.3.\nKea 1.6.3 - Current Stable Version Kea 1.6.3 is our current stable version. We normally avoid making significant changes in a stable version, but in this release we have backported three very useful features from our development branch. There is no need to update to this version unless you are interested in one of these new features. The new features in this version are:\n1. status-get call implemented, enabling monitoring with Stork Kea 1.6.3 now supports a status-get command that returns a detailed current status, giving insight into the current operation of any Kea server. In particular, this command is heavily used by Stork, ISC\u0026rsquo;s new dashboard system. 
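Like other Kea management commands, status-get is sent as a JSON object POSTed to the Control Agent's HTTP endpoint. The sketch below shows the shape of the request body; the service list and the port 8000 in the comment are deployment-specific assumptions, not taken from this announcement:

```python
import json

# The command envelope that Kea's Control Agent expects: a "command"
# member, plus an optional "service" list routing the command to
# specific daemons (here, the DHCPv4 server).
command = {
    "command": "status-get",
    "service": ["dhcp4"],
}
body = json.dumps(command)

# In a live deployment, this body would be POSTed to the Control Agent,
# e.g.:  curl -X POST -H 'Content-Type: application/json' \
#             -d "$body" http://127.0.0.1:8000/
print(body)
```

The response is likewise a JSON object; Stork polls this endpoint periodically to build its status display.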
With the addition of that command, it is possible to use Kea 1.6.3 with Stork.\nFor a quick introduction to what Stork offers, look at the screenshot library. Stork is open source and can be downloaded from the ISC website where you download Kea, or there are packages available on Cloudsmith. We are currently doing monthly development releases of Stork, so the functionality is evolving rapidly.\n2. Swap the order of Host Reservation and Lease checks for conflicts During the process of assigning a dynamic address, Kea will cycle through all the addresses in the selected pool in order, checking each one to see if another client has already reserved it, and then checking to see if another client has already leased it. When it finds an address that is neither reserved nor already assigned, it can proceed to allocate it to the new client. With this change, the order of these two checks is reversed.\nKea 1.6.2 pickAddress\ncheck address for HR\ncheck address for existing lease\nKea 1.6.3 pickAddress\ncheck address for existing lease\ncheck address for HR\nPrior to this change, every possible address was checked for an existing host reservation. Checking for host reservations is an expensive operation, particularly if host reservations are stored in a separate database. In a pool where most of the addresses are already assigned, it is more efficient to first check for an existing lease, and then only check for a host reservation if there is no existing lease.\nThe purpose of this change was to improve efficiency. In our internal testing at ISC, we confirmed that with a highly utilized pool this change lowered CPU utilization and shortened the time to identify an available address for the client. This should also improve performance in cases where lease allocation is significantly faster than reservation lookups, such as when using memfile for leases and a SQL database for reservations. This change was implemented for both DHCPv4 and DHCPv6.
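The reversed order of checks can be sketched in Python. This is a simplified model of the logic described above, not Kea's actual C++ implementation; the pool, lease, and reservation data are made up:

```python
def pick_address(pool, has_lease, has_reservation):
    """Return the first available address, using the Kea 1.6.3 order:
    check for an existing lease first (cheap), and only query the
    host-reservation backend (expensive) for unleased addresses."""
    for addr in pool:
        if has_lease(addr):        # cheap check, e.g. memfile lookup
            continue
        if has_reservation(addr):  # expensive check, e.g. SQL query
            continue
        return addr
    return None                    # pool exhausted

# Toy data: a highly utilized pool of ten addresses.
pool = [f"192.0.2.{i}" for i in range(1, 11)]
leases = set(pool[:8])             # .1 through .8 already leased
reservations = {"192.0.2.9"}       # .9 reserved for another client

hr_lookups = 0
def has_reservation(addr):
    global hr_lookups
    hr_lookups += 1                # count trips to the reservation backend
    return addr in reservations

result = pick_address(pool, lambda a: a in leases, has_reservation)
print(result, hr_lookups)          # only the two unleased addresses are checked
```

With the 1.6.2 order, the reservation backend would have been queried for all ten addresses; here only the two unleased addresses incur that cost.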
(Details on Gitlab #1124)\n3. Client Classes specified in global host reservations are assessed when selecting a subnet It is possible to specify per-client client classes within a host reservation. These classes, however, were previously evaluated only after an address had been selected for the client, so classification could only influence which DHCP options were given to the client. However, we found some use cases where it was helpful to assign client classes in a global host reservation and use those to influence subnet selection within a shared network and pool selection within a subnet.\nNow, if client classes are found in the global reservations, or client classes are specified within non-global reservations and the subnet belongs to a shared network, Kea removes the already-evaluated classes (except for built-in classes and those set by the hooks libraries) and evaluates them again, taking into account those found in the reservations.\nTo read an excellent issue report from user Mike Kazantsev, see Gitlab #1324. Thank you, Mike!\nClient classes specified within host reservations can now be used to influence subnet selection within a shared network and pool selection within a subnet. (Details on Gitlab #1139)\nPackages: We have produced and published on Cloudsmith packages for Kea 1.6.3 for Ubuntu 18.04, Debian 9, and CentOS 7. Fedora 30 is now obsolete, so we will not be publishing a package for Fedora 30. There are newer versions of Ubuntu, Debian, Fedora, and CentOS that were not available when we started the Kea 1.6 branch. We have not created packages for these, although if we get significant user demand for them we could. We have made a number of improvements in our packaging scripts for the 1.7 branch, fixing issues with paths and permissions. We don\u0026rsquo;t plan to backport these changes to 1.6 because we expect this is the last 1.6 release we will be making, since 1.8 is only a month or two away from release.
So, the upshot is, if you have been using the 1.6.x packages previously, this should be a very simple update. If you have not yet started using Kea packages, you might be better off building this release from source and adopting the Kea packages with a 1.7 or later version, because the package scripts will change after 1.6.\nKea 1.7.10 - Latest Development Version Kea 1.7.10 is our latest monthly development version. We are hoping this will be the last release on this development branch before we renumber it and produce our next stable version, Kea 1.8.0. If you are curious about the status of Kea 1.8.0, you can view the milestone and all the remaining work planned for version 1.8 in our Kea Gitlab project. Those users who are closely following the progress of our work on multithreading Kea might be interested in our latest performance test report, which now shows performance improvements in every scenario.\nAs with any other development release, use this with caution. Development releases are not recommended for production use.\nThe most significant changes introduced in this version are:\nNew Packages for Kea Migration Assistant (Keama). Keama is a tool that assists migration from ISC DHCP to Kea by using an ISC DHCP config file to generate a JSON config file in a format accepted by Kea. The configuration syntax and some core architectural designs are different, so the conversion does not cover 100% of cases, but at least it gives users a running start. Previously, a manual compilation was necessary to get the tool, but now it is available as an RPM or DEB package (#1081). Packages are available in ISC\u0026rsquo;s Cloudsmith repository.\nRemoved obsolete configuration of top-level logging and other daemons. Very early Kea versions assumed that the configurations for all daemons (DHCPv4, DHCPv6, DDNS, CA) were to be stored in the same file, sharing a top-level logging entry.
This was convenient in the early days, but it was fundamentally incompatible with commands such as config-set and config-write when they were introduced, as one daemon would overwrite other daemons\u0026rsquo; configurations. This was deprecated a long time ago, and since 1.6.0, Kea has produced warnings when an old configuration was detected. This backward compatibility has now been removed, and Kea prints an error if an old configuration is detected (#916).\nImproved performance of subnet selection. One of the crucial steps of packet processing in Kea is subnet selection, as Kea needs to determine which subnet to use to allocate leases. Prior to this change, Kea walked through subnets in the order configured, which was fine for a smaller number of subnets, but for large deployments that count subnets in the thousands this was inefficient. The data structure is now a multi-index container, which should help with large deployments. The side effect is that subnets are now evaluated in the order of their subnet-id. This generally should not affect any users, except in cases where people relied on the earlier evaluation order (as specified in the config file) (#553, #1265). This feature was also included in the Kea 1.6.3 version.\nMultithreading tuning. Two new sections have been added in the ARM regarding DHCPv4 and DHCPv6 performance tuning for multithreading. We also added a link to our performance report, which is updated weekly (#1285). Our Developer\u0026rsquo;s Guide has been updated as well (#996, #1264).\nBetter recovery after failed config. Kea\u0026rsquo;s configuration process is complex and can fail in various ways. Depending on the type of failure (e.g. syntax error, removal of a dynamic network interface, database connection failure, etc.), Kea may not be able to recover to its working state.
Several improvements have been made: Kea now handles interface misconfiguration better (#1330); when recovery is not possible, Kea clearly reports a critical message that it is unable to provide service; and the message documentation now has a better explanation of the steps needed to recover the service (#1166, #507).\nDNS support for lease4-del and lease6-del commands. Kea previously provided lease4-del and lease6-del commands. However, when removing leases, there was no way to instruct Kea to clean up the corresponding DNS entries. It is now possible (#544).\nCross-compilation for Raspberry Pi. We are working on a solution to be able to run Kea efficiently on Pis. This is a long process, but we\u0026rsquo;re getting there, step by step. In this release we improved the makefiles and added a section in the Developer\u0026rsquo;s Guide regarding cross-compilation (#1221). We also updated the parameters to the ./configure script to better support cross-compilation (#1223).\nBetter handling for a large number of interfaces. Believe it or not, there are users who need more than 64k interfaces in their systems. Kea previously failed on interfaces with an ifindex greater than 65535. It does not anymore (#1302).\nLFC process now runs with lower priority. One user was concerned with high CPU utilization when Lease File Cleanup was running. The LFC process is now started with lower priority, meaning the OS will deprioritize it. The amount of work needed has not changed, but the load should be spread over a longer period of time, making it less noticeable (#1311).\nNegative statistics and getNextRow failed in MySQL The underlying problem causing some statistics to become negative was fixed in the 1.7.9 release. However, there was no good way to fix the already-broken statistics after migrating to 1.7.9. The kea-admin tool now provides the stats-recount command to correct the statistics (#1196).\nMoved to standard chrono library in C++11.
Previously, Kea used the POSIX time library from boost. The code now uses the standard chrono library from C++11. In addition to the benefit of using a standard library, there\u0026rsquo;s a small related performance improvement, as the standard lib does fewer conversions (#1174).\nAssorted bugfixes and improvements GNU make-specific features were removed, which should improve compatibility with FreeBSD and possibly other BSD clones (#1248). We fixed a bug in leasequery that could cause some unit-tests to fail (#1344), and fixed a compilation problem with the musl library that affected Alpine 3.10 builds (#1337). We fixed a test problem in the stats library that caused the tests to fail if run in a specific order (#1293). The config backend has better time granularity; previously, it could have missed configuration updates if multiple updates were made during the same second (#1247). And we improved the hook library unloading routine, which previously could have caused a crash on shutdown in some rare cases (#1282, #1279).\nCode of conduct. Kea now has a code of conduct adapted from the Django project. Please see the code_of_conduct.md file or use this link.\nKnown Issues Our internal testing has identified only one outstanding issue, somewhat related to MT and HA: in some specific circumstances (notably, when Kea has received a packet and is still processing it before sending a response while the application is being shut down), the application may fail during shutdown. The underlying problem is related to improper hooks library de-initialization and has existed for a long time; it is not strictly related to MT. The issue is very rarely reproducible in our tests. However, due to the parallel nature of MT, the likelihood of triggering the problem, especially under heavy load, is greater. Details are in ticket #1279.
This problem is expected to be fixed before the next stable (1.8.0) version is released.\nKea References Source Code The main source code repository for Kea is in ISC\u0026rsquo;s GitLab: https://gitlab.isc.org/isc-projects/kea.\nThe Kea source code tarballs are available on the ISC downloads page as well as on the ISC FTP server. ISC is also providing RPM and deb binary packages for Kea. They are available in our package repositories on Cloudsmith.\nIssues \u0026amp; Mailing List We highly recommend the kea-users mailing list. We encourage users to test Kea and report back their findings on the mailing list or on our GitLab page.\nDocumentation The Kea Administrator Reference Manual is posted on Readthedocs.io.\nRelease Model \u0026amp; License Kea has monthly development releases, and approximately annually we release a new stable branch. The stable version is then updated as needed, when we have significant maintenance updates.\nKea is released under the Mozilla Public License, version 2.0: https://www.mozilla.org/en-US/MPL/2.0\nDownload \u0026amp; Installation The easiest way to install the software is to use native deb or RPM packages. They can be downloaded from:\nhttps://cloudsmith.io/~isc/repos/\nSupport We support development of Kea, Stork, and our other software through the sale of annual support subscriptions. Contact our sales team to discuss your BIND and Kea support requirements. Contact us for more information.\n","permalink":"/blogs/kea-1.6.3-released/","tags":["DHCP","New Releases","Kea"],"title":"Kea 1.6.3 and 1.7.10 Released"},{"categories":["BIND","Open Source","Webinars","Events"],"contents":"Graham Hayes (Core Developer for Designate, the DNS component of OpenStack) and Stephan Lagerholm (OpenStack user and DNS enthusiast) will give an overview of the DNS support in OpenStack.
They will explain how the DNS service is used in OpenStack, what a typical application looks like, and how managing DNS in OpenStack compares to managing DNS in a traditional network environment.\nThis presentation is aimed at DNS system administrators familiar with traditional DNS deployments who are interested in learning how to transfer those skills to OpenStack deployments.\nOpenStack Block Diagram The slides and a recording of the webinar will be available on our Presentations page once the webinar is complete.\nWe invite all BIND 9 administrators and any other interested users to register for the July 22 webinar at 1700 UTC/1 PM EDT/10 AM PDT, at https://us02web.zoom.us/webinar/register/WN_jUdpWaptSlauxAEKnWbdXQ.\n","permalink":"/blogs/dns-designate-webinar-july-2020/","tags":["BIND","DNS","Webinars"],"title":"Designate - the DNS service in OpenStack - July 22, 2020"},{"categories":["BIND","Kea"],"contents":"As a service for our users and customers, several months ago ISC began offering packaged binary versions of our BIND 9 and Kea DHCP software, via Cloudsmith. We encourage anyone looking for binaries of our software to visit this repository. (Of course, the full source code tarballs are still provided for every release, on our Downloads page.)\nThe repository for ISC\u0026rsquo;s open source packages is provided by Cloudsmith at no charge as a free community service for non-profit open source projects.\nWhy does ISC provide BIND 9 and Kea DHCP packages? We want to make sure that our users and customers have access to binaries, for two reasons:\nWe want to make our users\u0026rsquo; lives easier by offering an alternative to building our software from sources; Our users should not have to wait for OS distributions that package Kea to include all the latest bug fixes. We provide binary packages along with sources at the time of release. 
For more information about our BIND 9 packages, please read this article in our KB: ISC Packages for BIND 9.\nFor more information about our Kea DHCP packages, please read this article in our KB: ISC Packages for Kea DHCP.\nSome of the software in our repositories is private and only available with a paid ISC support subscription. For more information on how to get access to those additional features, please contact our sales team. Thank you!\n","permalink":"/blogs/cloudsmith-repos/","tags":["BIND","Kea","cloudsmith","repositories","repos","binaries","packages"],"title":"Cloudsmith Repositories for BIND 9 and Kea DHCP"},{"categories":["Stork","New Releases","Kea"],"contents":"ISC is delighted to announce the release of Stork 0.8. The features provided in this version are:\n1. BIND 9 Prometheus Exporter The Stork agent now supports native statistics export from BIND 9 to Prometheus. This is the first time ISC has published a Prometheus exporter for BIND, although there are several open-source exporters available. This exporter does require installing the Stork agent, but it does not depend on running the rest of Stork. Users can use this exporter with Prometheus, with or without using the Stork dashboard. (At the moment the Stork dashboard does not provide any significant benefit for a BIND user.) The benefit of using the Stork agent to provide BIND statistics is that it will also provide statistics on the host computer. When troubleshooting some problems, performance problems in particular, it can be essential to see the platform resource utilization and the application data together. Please do let us know which statistics you would like to see in future Stork versions for BIND monitoring.\n2. Events Log A new subsystem has been implemented that collects and distributes events and event notifications. Stork is able to detect and report a number of events, such as the detection of a new service or a new subnet.
It is not possible to subscribe to specific events yet; the main dashboard shows all events. We will be adding more events to the events log: this is the initial infrastructure for events.\nStork Events Log 3. Improvements to the High Availability monitoring Stork now shows the last time one of your servers in HA mode failed, if such an event was recorded by Stork.\nStork High Availability detail 4. Improvements to Host Reservations. Stork is now able to show hostnames in host reservations, if specified.\nThanks again to everyone who assisted us in making this release possible.\nWe look forward to receiving your feedback.\nStork References Source Code The main source code repository for Stork is in ISC\u0026rsquo;s GitLab: https://gitlab.isc.org/isc-projects/stork. It is mirrored on github.\nThe Stork 0.8.0 source code tarball is available on the ISC downloads page as well as on the ISC FTP server. ISC is also providing RPM and deb binary packages for Stork. They are available in our package repositories on Cloudsmith.\nIssues \u0026amp; Mailing List We have set up a stork-users mailing list. We encourage users to test Stork and report back their findings on the mailing list. Issues can also be reported on our GitLab page.\nDocumentation The Stork Administrator Reference Manual is posted on Readthedocs.io.\nWebinars \u0026amp; Presentations We presented a webinar about Stork on May 6, 2020, in which we conducted a live demonstration. 
This and other recordings are available on the ISC presentations page.\nStork Micro Website We have prepared a Stork microsite with links to all relevant resources.\nRelease Model \u0026amp; License Stork has monthly development releases on the first Wednesday of each month (with some exceptions around holidays), although we may delay it a bit for last-minute changes.\nStork is released under the Mozilla Public License, version 2.0: https://www.mozilla.org/en-US/MPL/2.0\nDownload \u0026amp; Installation The easiest way to install the software is to use native deb or RPM packages. They can be downloaded from:\nhttps://cloudsmith.io/~isc/repos/stork/\nThe Stork source and PGP signature for this release may be downloaded from:\nhttps://downloads.isc.org/isc/stork\nSupport We support development of Stork and our other software through the sale of annual support subscriptions. Support for Stork is included with support for the ISC software it manages, Kea DHCP and BIND 9. Our sales team would be happy to give you a live Stork demo, or discuss your BIND and Kea requirements. Contact us for more information.\n","permalink":"/blogs/stork-0-8-released/","tags":["Stork","New Releases","Kea"],"title":"Stork 0.8 Released"},{"categories":["BIND"],"contents":"A Conversation with Quad9 We recently had an extended chat with the Quad9 team to discuss their experience with BIND 9. John Todd, Danielle Deibler, and Corey Mosher were kind enough to spend nearly two hours talking with us.\nQuad9 operates a large public resolver at 9.9.9.9, of course, but also 9.9.9.10, and 9.9.9.11. They use multiple open source DNS software systems to resolve queries. Incoming queries are first triaged by dnsdist from PowerDNS, then routed to either Unbound, PowerDNS Recursor, or BIND 9. Their systems are deployed in 152 POPs around the world; typically these are operated by Packet Clearing House in Internet IX locations. 
PCH provides Quad9 with co-location, transit, and local hands support.\nWho are Quad9’s users? Quad9 has gotten a lot of coverage in the technical press. They are listed in many FAQs and How-Tos and are easily findable via search engines. However, although they have an Android app, most of Quad9’s users are not private individuals, but small and medium-sized ISPs and enterprises where a single decision can shift a large number of users. John estimated that 85% of their traffic is forwarded to them by ISP or enterprise forwarding caches.\nThe Quad9 service was established as a non-profit, privacy-protecting open DNS resolver. Their very explicit privacy policy is a big draw for users concerned about privacy. However, John thinks that most of their enterprise users are really attracted by Quad9’s security filtering that blocks malware and phishing sites.\nRequirement for EDNS Client-Subnet Identifier One of their first big users was the City of New York, which is using the Quad9 service for both their free city-wide public wifi and within city agencies. The public wifi project required EDNS Client Subnet Identifier, a DNS feature that uses the end user’s subnet information to return an address that may be customized for that location. This feature was developed for efficient access to local content cached by CDNs. ECS introduces additional privacy concerns, because the end-user subnet information is shared upstream, so Quad9 requires an explicit opt-in for the service.\n\u0026ldquo;We needed a highly robust implementation of ECS, that had a good implementation of the standards, so that was the reason we used BIND.\u0026rdquo; - Danielle Deibler\nQuad9 is running the BIND 9 Subscription Edition, which is a limited-access version of BIND that implements EDNS Client-Subnet Identifier. Queries from users who opt in to the ECS service are routed to 9.9.9.11, where they are handled by BIND resolvers.
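The ECS data attached to those queries has a compact wire format defined in RFC 7871. As a rough illustration, here is a hand-rolled sketch using only the Python standard library (this is not code from BIND or Quad9) of how the option for a given prefix is packed:

```python
import ipaddress
import struct

def pack_ecs_option(subnet: str) -> bytes:
    """Pack an EDNS Client Subnet option per RFC 7871:
    OPTION-CODE (8) | OPTION-LENGTH | FAMILY | SOURCE PREFIX-LENGTH |
    SCOPE PREFIX-LENGTH | ADDRESS (truncated to the prefix)."""
    net = ipaddress.ip_network(subnet)
    family = 1 if net.version == 4 else 2         # IANA address family
    # Only as many address bytes as the prefix requires are sent.
    addr = net.network_address.packed[: (net.prefixlen + 7) // 8]
    # SCOPE PREFIX-LENGTH is 0 in queries; servers set it in responses.
    payload = struct.pack("!HBB", family, net.prefixlen, 0) + addr
    return struct.pack("!HH", 8, len(payload)) + payload

print(pack_ecs_option("203.0.113.0/24").hex())
```

Truncating the address to the advertised prefix is also why ECS leaks only part of the client address: a resolver forwarding a /24 sends three address bytes upstream, not the full client IP.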
ISC recommends that users deploying ECS implement a whitelist to control where the additional information is included. In the case of the massive Quad9 service, managing a whitelist was too onerous, so they include the ECS information with every query handled by the 9.9.9.11 service.\nThe main technical reason for whitelisting destinations for EDNS client-subnet information is that, until quite recently, some DNS systems didn’t understand this information; it could cause them to send back errors, or even fail to respond at all. ISC’s Mark Andrews has been lobbying for improved EDNS compatibility in the industry for years, and in 2019, a group of DNS software developers, commercial vendors, and operators collaborated in a joint effort to force a “clean-up” of these non-compliant systems. According to Quad9, DNS Flag Day 2019 helped tremendously with reducing the problems they saw with EDNS compatibility.\nAlso called their “CDN Friendly” resolver service, Quad9’s 9.9.9.11 service also supports DNSSEC validation. ISC recommends DNSSEC validation as a best practice, so we were glad to hear they are offering this to all their users.\nTechnical Feedback We wondered if Quad9 had seen any technical problems with their BIND 9-based ECS service, since the ECS feature is very complex. By design it increases the size of the resolver cache, because it stores customized responses on a per client-subnet basis. Operating an ECS resolver at a huge scale could potentially consume a lot of resources, so we pressed Quad9 for details. However, although the Quad9 team uses instances with limited memory for the ECS implementations, they haven’t had any problems. They tested the BIND ECS implementation pretty thoroughly before they deployed it and found it to be very solid. (We can\u0026rsquo;t resist sharing Corey\u0026rsquo;s compliment below.)\n“Your ECS implementation just worked right out of the box.
That\u0026rsquo;s it!\u0026quot; - Corey Mosher\nThe resolver ECS feature is popular among our BIND support subscribers. Some larger enterprises are using ECS to customize answers for internal DNS on a per-department basis. Another popular application is similar to the Quad9 application, forwarding some traffic to a resolver service with extra security filtering.\nFinally, we discussed plans for the future. Quad9 has been running a DNS over TLS service (the new standard for encrypted DNS) since 2017. ISC is in the process of developing this feature in BIND, and is committed to releasing it in 2020. We are interested in benefiting from Quad9’s operational experience, and in collaborating on the continued development of standards for DNS privacy and encryption.\nIf you think that your organization could benefit from the features of ECS in BIND 9 the way Quad9 does, please contact us for a support subscription quote.\nFrom the Quad9 FAQ What is EDNS Client-Subnet?\nEDNS Client-Subnet is a method that includes components of end-user IP address data in requests that are sent to authoritative DNS servers. This means that there is privacy “leakage” for recursive resolvers that send EDNS Client-Subnet data, where components of the end user’s IP address are transmitted to the remote site. While this is typically used to improve the performance of Content Distribution Networks, we have determined that Client-Subnet data falls into a grey area of personally identifiable information, and we do not transmit that data in our default service. In some circumstances, this may result in suboptimal routing between CDN origins and end users. We do support a secure service that sends Client-Subnet data.\nSecure IPv4: 9.9.9.11 Provides: Security blocklist, DNSSEC, EDNS Client-Subnet sent. If your DNS software requires a Secondary IP address, please use the secure secondary address of 149.112.112.11.\nSecure IPv6: 2620:fe::11 Provides: Security blocklist, DNSSEC, EDNS Client-Subnet sent. 
If your DNS software requires a Secondary IP address, please use the secure secondary address of 2620:fe::fe:11.\nAbout the BIND 9 S-Edition The BIND 9 S-Edition is available to ISC support subscribers only. We provide it in both source code and pre-packaged images. The BIND 9 S-Edition is based on an extended-support version for stability, with added commercial features like ECS from our private repository, and other selected new features and fixes backported from our leading edge branch. For more information about how to subscribe for BIND support from ISC, including the BIND Subscription Edition, contact us at info@isc.org.\n","permalink":"/blogs/quad9-2020-06/","tags":["BIND"],"title":"Quad9 and the BIND 9 Subscription Edition"},{"categories":["BIND","Security"],"contents":"ISC is releasing updated versions of BIND 9 to address two newly-discovered security vulnerabilities We have released new versions of BIND: 9.16.3, 9.14.12 and 9.11.19, which address two vulnerabilities just disclosed. New versions are available for download from https://www.isc.org/download/ and from https://ftp.isc.org/isc/bind9/cur/.\nIn addition, updated versions of the BIND 9 packages ISC produces are posted.\nPackages for CentOS and Fedora are on COPR. Packages for Ubuntu are on Launchpad. The two vulnerabilities are CVE 2020-8616 and CVE 2020-8617. Both are High Severity vulnerabilities that we recommend operators patch as soon as possible. Most currently supported versions of BIND 9 from ISC are vulnerable to these two issues.\nCVE 2020-8616 affects recursive resolvers only, and is a vulnerability to an amplification attack. 
CVE 2020-8617 affects both recursive resolvers and authoritative servers, and involves an assertion failure.\nFor more details, please consult the official vulnerability announcements linked above and below.\nKey references BIND does not sufficiently limit the number of fetches performed when processing referrals - https://kb.isc.org/v1/docs/cve-2020-8616 A logic error in code which checks TSIG validity can be used to trigger an assertion failure in tsig.c - https://kb.isc.org/v1/docs/cve-2020-8617 FAQ and Supplemental Information for CVE-2020-8617 - https://kb.isc.org/v1/docs/cve-2020-8617-faq-and-supplemental-information We announce significant BIND 9 vulnerabilities on the bind-users list, in accordance with our published Software Defect and Security Vulnerability Disclosure Policy. To be notified of vulnerabilities when they are published in the future, please consider subscribing.\n","permalink":"/blogs/bind9-vulnerabilities-2020-05/","tags":["BIND","CVE","Security"],"title":"Two BIND 9 Security Vulnerabilities Announced Today"},{"categories":["BIND"],"contents":"Is increasing complexity inevitable? The worldwide DNS system is very stable and scalable, but the software underlying it is extremely complex. At ISC we kind of enjoy mastering the intricacies of the DNS. BIND\u0026rsquo;s most enduring competitive strength may even be our feature-completeness. However, we know that complexity is really the enemy of stability and performance.\nISC has long endeavored to make BIND 9 a \u0026lsquo;reference implementation\u0026rsquo; of the IETF DNS standards, but in recent years we have been hearing from our users that more is not always better.
Our colleague Bert Hubert from PowerDNS coined the term \u0026ldquo;the DNS Camel\u0026rdquo; in a presentation at an IETF DNS Operations working group meeting, in which he argued that the Internet community should stop increasing the complexity of DNS software implementations by constraining the impulse to create additional Internet Standards for the DNS.\nHow did the DNS become so complicated? The Domain Name System - and the BIND 9 software implementation - was first developed at a time when the Internet infrastructure was very different than it is today. Some BIND 9 options reflect that earlier architecture. The dial-up options probably fit into that category, although it is possible they are still useful in some of the more remote corners of the Internet. Soon, additional new network transport protocols, including HTTPS, TLS, and possibly QUIC, will further complicate the DNS. In the early days of BIND, multi-user operating systems were more heterogeneous. BIND implemented some low-level features that were missing in some of the operating systems of the time, even implementing assembly code to adapt to OS variants. Modern operating systems have since obsoleted the named implementations of those features. We are remediating that complexity with refactoring, such as the recent refactoring in BIND 9.16 to use the libuv library in place of named\u0026rsquo;s own network code. While adopting other open source components simplifies the BIND 9 code, it does not necessarily simplify the situation for packagers and operators, as the issues that have arisen with the availability of libuv have shown. The pressures of combatting increasing abuse have spawned many additional features to make the DNS more resilient. Examples in BIND include Response Rate Limiting, Fetches per Server, Response Policy Zones, DNS cookies, Refuse ANY, Pre-fetch, and Access Control Lists. At the same time, the DNS has scaled up as Internet use has grown exponentially.
This has brought multi-processor systems with multithreaded, high-performance software, constant streams of zone updates with incremental zone transfers, and TTL limits. Modern innovations like Content Delivery Networks, Software as a Service platforms, special DNS filtering services, and other marketing imperatives have driven a plethora of new features, like CNAMEs, EDNS Client-Subnet Identifier, and NXDOMAIN redirection. Is it possible to reduce software complexity? Of course it is possible!\nReducing the complexity of BIND 9 requires a combination of refactoring and simplifying obsolete code, and removing obsolete features. The BIND 9 development team has invested quite a bit of effort since 2017 in rewriting some functions, but we haven\u0026rsquo;t yet removed many features. ISC has published a process for removing features from BIND 9; per that policy, we will first ask for user input when we propose to remove a supported feature. We have actually removed a few features, but as you can see from the 980-line-long list of options in BIND 9, there are still an enormous number of features remaining.\nWhat kind of features are easy candidates for removal? ISC participates in DNS Flag Day, an effort to remove complexity due to workarounds that adapt to incompatible implementations. These workarounds can require that named supervise communications, catch non-standards-compliant messages, and adjust to them with exceptional behavior. We have been removing code that was explicitly added to enable specific obsolete operating systems, such as ULTRIX and 32-bit Windows. Some features were added with the intent that they would be a transitional bridge as new technology was phased in; for example, the DNSSEC Look-aside Validator was planned as a transition feature during the early adoption of DNSSEC. Multiple features were added to ease the introduction of IPv6. There is always going to be a debate about when these transitions end, but these features are clearly fair game.
A few features have proven to be inadvisable. For example, we have removed support for some insecure DNSSEC algorithms (Algorithms 1, 3, 6, and 12, aka RSAMD5, DSA, DSA-NSEC3-SHA1, and ECC-GOST). Fed up with complexity? Help us identify little-used features and options to remove! At ISC we will continue to retire features that we think are obsolete. However, it is much harder to remove features than you might think. For any given feature you think is useless, there is someone on the Internet using it. You can help by identifying features you don\u0026rsquo;t use, or that you think are inadvisable.\nSurvey Our colleagues at CZ.NIC are running a survey of open source DNS operators to try to determine which features are really in widespread use, and which can perhaps be deprecated with little impact. Please help open-source vendors by providing feedback in their survey.\nIf you don\u0026rsquo;t want to complete the whole survey, consider at least answering the question about remote telemetry. If we had some data about what features are actually in use, that would be invaluable.\nIf you can\u0026rsquo;t be bothered with a survey at all, consider suggesting some of your favorite lame-duck features as candidates for removal by emailing the bind-users list.\n","permalink":"/blogs/dns-complexity-survey-2020/","tags":["BIND"],"title":"Reducing the Complexity of the DNS"},{"categories":["Stork","New Releases","Kea"],"contents":"ISC is delighted to announce the release of Stork 0.7. The long-term goal of the project is to provide a scalable monitoring and management solution for both BIND 9 and Kea DHCP. This is a new project, but we feel it is ready for use now for monitoring Kea. The BIND support in Stork is not quite ready for use yet; we are planning to change the way the Prometheus data exporter works, and there are not many statistics available for BIND yet in the dashboard.\nStork consists of several components. 
The Stork Agent runs on the servers you wish to monitor and streams data to a Prometheus time-series database as well as to a Stork application. The Prometheus database feeds data to a Grafana data visualization system. The Stork application is a modern, responsive web application with a RESTful API that uses a PostgreSQL database. It needs a local agent on the monitored server to collect and report statistics on the platform, as well as on the ISC software application. We chose to use the popular Prometheus and Grafana open source utilities to provide very flexible and scalable visualization of your traffic data over time. The Stork application itself can integrate all of this data and present a unified view across your entire deployment of Kea and BIND servers.\nThe features provided in this version are:\n1. Dashboard Stork features a main dashboard which provides all the most essential information: the top 5 subnets and networks with the highest pool utilization; overall statistics; the status of your Kea and BIND 9 applications, including important details such as online and HA status; and more. If Grafana is configured, a dedicated link to show statistics for each subnet and network is shown.\nStork Dashboard 2. Kea Subnets \u0026amp; Shared Networks Stork can retrieve and display the IPv4 and IPv6 subnets specified in each Kea instance it monitors. Besides subnets, Stork is able to retrieve shared network information, understand which subnets belong to each, and display that information in a meaningful way. The shared networks view shows an aggregated list of all networks configured on all the Kea servers that Stork monitors.\nStork calculates pool utilization based on lease statistics fetched from Kea. 
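As background on how that calculation works: utilization is just the ratio of assigned leases to total pool addresses, aggregated per subnet or per shared network. A toy sketch (the statistic key names below follow Kea statistic naming conventions for illustration; this is not the actual Stork code):

```python
def pool_utilization(stats):
    """Percent utilization from per-subnet lease statistics."""
    total = stats.get("total-addresses", 0)
    assigned = stats.get("assigned-addresses", 0)
    return 0.0 if total == 0 else 100.0 * assigned / total

def shared_network_utilization(subnets):
    """Aggregate utilization across all subnets of a shared network."""
    total = sum(s.get("total-addresses", 0) for s in subnets)
    assigned = sum(s.get("assigned-addresses", 0) for s in subnets)
    return 0.0 if total == 0 else 100.0 * assigned / total

# Two illustrative subnets in one shared network:
subnets = [
    {"total-addresses": 200, "assigned-addresses": 150},
    {"total-addresses": 50, "assigned-addresses": 25},
]
```

With these numbers, the first subnet is 75% utilized, while the shared network as a whole is at 70%, which is why Stork reports utilization at both levels.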
The pool utilization is shown for subnets in the \u0026ldquo;subnets\u0026rdquo; and \u0026ldquo;shared networks\u0026rdquo; views, and for whole shared networks as well.\nBoth views (subnets and shared networks) allow users to filter subnets and networks based on their various attributes.\nStork DHCP Subnets Stork DHCP Shared Networks 3. Kea Host Reservations Stork is able to retrieve and display host reservations from Kea. This covers host reservations defined in subnets as well as global ones.\nKea can store reservations in two ways: (1) in a configuration file (which is suitable for smaller, more static deployments) or (2) in a database (which is better for larger, dynamic environments). Stork can display reservations coming from both sources and show the source for each reservation. To get the information from a database, the Kea host_cmds hook must be installed.\nStork DHCP Host Reservations 4. Kea High Availability Status Most production DHCP services require High Availability (HA). Kea HA status is detailed but easy to understand in Stork; the entire widget is color-coded (green/yellow/red) to indicate status at a glance. Details on each server include online status, when it was last in contact with its peer, what the current HA state is, and more. There\u0026rsquo;s also a link to switch the view between a server and its partner.\nKea HA Status on Kea App Page Kea HA Status on Dashboard 5. Global Search Global Search looks through all available data for a given substring. This is particularly useful if you want to look for anything related to an IP or MAC address. You may get a subnet, a reservation, and/or a network. If you search for a specific MAC address or part of one, Stork returns a list of matching reservations.\nStork Global Search 6. Prometheus and Grafana Integration Prometheus and Grafana are well-known solutions for infrastructure monitoring. 
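For context on what this integration involves: a Prometheus exporter publishes statistics over HTTP in the Prometheus plain-text exposition format, which Prometheus then scrapes. A schematic sketch of that format (the metric name below is hypothetical, not one the Stork agent actually exports):

```python
def to_exposition(name, help_text, samples):
    """Render one gauge metric in the Prometheus text exposition format.

    `samples` is a list of (labels, value) pairs, where `labels`
    is a tuple of (label_name, label_value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} gauge"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# A hypothetical per-subnet lease gauge, as a scraper might see it:
page = to_exposition(
    "kea_dhcp4_addresses_assigned_total",
    "Assigned DHCPv4 addresses per subnet",
    [((("subnet", "192.0.2.0/24"),), 150)],
)
```

Because the format is plain text over HTTP, Prometheus can scrape an agent with no agent-side state, and Grafana dashboards are then built on the stored time series.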
Stork integrates with Prometheus and Grafana on both the Stork agent side and the Stork web UI.\nThe Stork agent retrieves statistics from Kea and BIND 9 and exposes them in a format that can be scraped by Prometheus.\nGrafana templates for both Kea DHCPv4 and BIND 9 have been defined.\nThere is a simple configuration menu where a link to Grafana can be set. If configured, links to specific subnets are shown in the dashboard and the list of subnets.\nStork Kea DHCPv4 Dashboard in Grafana Stork BIND DNS Dashboard in Grafana - named Process Stats Stork BIND DNS Dashboard in Grafana - DNS Stats 7. Help System A context-sensitive help system has been implemented. It is basic and covers only part of the UI, but we plan to expand it over time.\nStork Help System Community Source Code The main source code repository for Stork is in ISC\u0026rsquo;s GitLab: https://gitlab.isc.org/isc-projects/stork. It is mirrored on GitHub.\nThe Stork 0.7.0 source code tarball is available on the ISC downloads page as well as on the ISC FTP server. ISC is also providing RPM and deb binary packages for Stork. They are available in our package repositories on Cloudsmith.\nIssues \u0026amp; Mailing List We have set up a stork-users mailing list. We encourage users to test Stork and report back their findings on the mailing list. Issues can also be reported on our GitLab page.\nDocumentation The Stork Administrator Reference Manual is posted on Readthedocs.io.\nScreenshot Gallery We have published a gallery of screenshots to let prospective users take a quick look at what they can expect.\nWebinars \u0026amp; Presentations We presented a webinar about Stork on May 6, 2020, in which we conducted a live demonstration. 
This and other recordings are available on the ISC presentations page.\nStork Micro Website We have prepared a Stork microsite with links to all relevant resources.\nRelease Model \u0026amp; License Stork has monthly development releases on the first Wednesday of each month (with some exceptions around holidays), although we may delay a release a bit for last-minute changes.\nStork is released under the Mozilla Public License, version 2.0: https://www.mozilla.org/en-US/MPL/2.0\nDownload \u0026amp; Installation The easiest way to install the software is to use the native deb or RPM packages. They can be downloaded from:\nhttps://cloudsmith.io/~isc/repos/stork/\nThe Stork source and PGP signature for this release may be downloaded from:\nhttps://downloads.isc.org/isc/stork\nThe signature was generated with the ISC code signing key, which is available at:\nhttps://www.isc.org/pgpkey\nThe installation, though quite simple, is described in detail in our documentation: https://stork.readthedocs.io/en/latest/install.html.\nSupport We support development of Stork and our other software through the sale of annual support subscriptions. Support for Stork is included with support for the ISC software it manages, Kea DHCP and BIND 9. Our sales team would be happy to give you a live Stork demo, or to discuss your BIND and Kea requirements. Contact us for more information.\nSummary We are very happy with and proud of the release of Stork. We hope it will help you in monitoring and managing our open source software.\nPlease let us know how it works for you. Feel free to share your feedback on the stork-users mailing list. We would like to hear whether the documentation is adequate and accurate. Please open tickets in the Stork GitLab project for bugs, documentation omissions and errors, user interface suggestions, and enhancement requests. 
We want to hear from you even if everything worked.\nThanks again to everyone who assisted us in making this release possible.\nWe look forward to receiving your feedback.\n","permalink":"/blogs/stork-0-7-released/","tags":["Stork","New Releases","Kea"],"title":"Stork 0.7 Released"},{"categories":["Security","Open Source"],"contents":"After observing the use of this practice by some other open source projects, we here at ISC have decided to amend our security policy to allow, at our discretion, limited pre-announcement of pending security vulnerability disclosures. Full details are available in our ISC Software Defect and Security Vulnerability Disclosure Policy.\nThis is our first experiment with the new practice and by making this pre-announcement we wish to inform the BIND user community that the maintenance releases scheduled for Tuesday, May 19, 2020 will contain security fixes for two separate (unrelated) security vulnerabilities.\nFurther details about the vulnerabilities will be publicly disclosed at the time the releases are published next Tuesday. It is our hope that this pre-announcement will aid BIND operators in planning to respond to that disclosure when it occurs.\nIf your organization is not already an ISC customer receiving Advance Security Notifications (ASNs) up to five days before vulnerabilities are publicly announced, you may wish to contact us for more information about this valuable security service.\nIf you have feedback or questions concerning this new policy, please direct them to security-officer@isc.org.\n","permalink":"/blogs/security-policy-update/","tags":["security","bind","dhcp","kea"],"title":"Update to ISC's Security Policy"},{"categories":["ISC","Open Source"],"contents":"Here\u0026rsquo;s another short profile of one of our engineers, so our customers can learn a little more about us. 
We hope you find these profiles as interesting as we do!\nAny ISC customer who has reached out to us for support services has probably interacted at least once with Cathy Almond, the senior support engineer and leader of ISC’s Support team. Cathy, who lived for many years just outside of London, now makes her home in Cornwall, England, with her husband.\nCathy joined ISC in 2009 and her many years of experience have been invaluable, both to her coworkers and to our support customers. Cathy has a degree in Mathematics from Oxford University and worked for HP before coming to ISC.\nCathy specializes in troubleshooting (anything!), but at ISC applies her skills and talents mostly to ISC BIND and ISC DHCP, and more recently to Kea DHCP.\nCathy loves working for ISC because her colleagues are smart, funny, and lovely people to work with. She says, \u0026ldquo;ISC is a great team and every day in Support is different; you never know what customers and users are going to throw at you!\u0026rdquo;\nIn her spare time, Cathy is an avid rower (scull and sweep, and owner of her own single scull) and also sings in a local choir. Her most cherished title is \u0026ldquo;Moomin\u0026rdquo; (grandma) to Emilia Trillian, who was born in February 2020.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We welcome your feedback at marketing@isc.org and look forward to bringing you more employee profiles in the future.\n","permalink":"/blogs/isc-profile-almond/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Cathy Almond!"},{"categories":["BIND","Open Source","Webinars","Events"],"contents":"Have you ever used ping to troubleshoot issues with the Domain Name System? Well, don’t! Dig is one of the most effective tools to help understand the relationships within the distributed database of DNS, and it’s a much more appropriate resource than ping. 
Many engineers use inefficient methods to analyze DNS, but we can show you how simple and effective troubleshooting is with dig.\nThis webinar will be presented by Andreas Taudte, CEO of honest consulting GmbH. With his company, Andreas helps IT departments of larger enterprises get a rock-solid DNS and DHCP infrastructure by analysing and questioning the status quo.\nThe slides and a recording of the webinar will be available on our Presentations page once the webinar is complete.\nWe invite all BIND 9 administrators and any other interested users to register for the June 10 webinar at 1700 UTC/1 PM EDT/10 AM PDT, at https://us02web.zoom.us/webinar/register/WN_l0slo7skR_Gnshc05ir-Ig.\n","permalink":"/blogs/dns-dig-webinar-june-2020/","tags":["BIND","DNS","dig","Webinars"],"title":"Troubleshooting DNS with dig Webinar - June 10, 2020"},{"categories":["BIND","Open Source","Webinars","Events"],"contents":"The EDNS Client Subnet Identifier (ECS) allows a DNS resolver to include information about the requesting client in the query, so that the authoritative system can customize the response. The BIND 9 Subscription Edition includes resolver support for ECS, which can be used with third-party authoritative systems that leverage ECS.\nECS was originally designed for high-bandwidth applications, like video streaming, to direct clients to a local cache. Application developers have seized on ECS as a way to enable other applications, such as selective forwarding. This tech talk, with ISC engineers Eddy Winstead and Evan Hunt, will explain what ECS is, how it works in BIND, and how it impacts the resolver cache. 
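For readers curious about the wire format: ECS travels as EDNS0 option code 8, carrying an address family, source and scope prefix lengths, and the client address truncated to the source prefix (RFC 7871). A minimal illustrative encoder, not BIND's implementation:

```python
import ipaddress
import struct

def encode_ecs(prefix: str) -> bytes:
    """Encode an EDNS Client Subnet option (RFC 7871) for an IPv4 prefix.

    Layout: OPTION-CODE (8) and OPTION-LENGTH, then FAMILY (1 = IPv4),
    SOURCE PREFIX-LENGTH, SCOPE PREFIX-LENGTH (0 in queries), and the
    address truncated to the fewest octets covering the source prefix.
    """
    net = ipaddress.ip_network(prefix)
    addr_bytes = net.network_address.packed[: (net.prefixlen + 7) // 8]
    data = struct.pack("!HBB", 1, net.prefixlen, 0) + addr_bytes
    return struct.pack("!HH", 8, len(data)) + data

option = encode_ecs("192.0.2.0/24")
```

Note that only three address octets are sent for a /24; the truncation is what limits how much client information leaks to the authoritative server.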
We will discuss some of the applications for ECS and take questions about our implementation.\nThe slides and a recording of the webinar will be available on our Presentations page once the webinar is complete.\nWe invite all BIND 9 administrators and any other interested users to register for the May 20th webinar at 1700 UTC/1 PM EDT/10 AM PDT, at https://us02web.zoom.us/webinar/register/WN_hiNuW3zOSCShuMrqBG3XGw.\n","permalink":"/blogs/ecs-webinar-may-2020/","tags":["BIND","EDNS","ECS","Webinars"],"title":"ECS Webinar - May 20, 2020"},{"categories":["ISC"],"contents":"In these scary and uncertain times, it\u0026rsquo;s more important than ever that we maintain and strengthen our interpersonal connections, especially when we can\u0026rsquo;t physically be together. And one very popular activity, since people have been locked at home, has been cooking and baking. So we here at ISC thought, \u0026ldquo;Why not create our own cookbook?\u0026rdquo;\nOur staff members are scattered around the globe and come from a variety of backgrounds and traditions, so we thought this could be an excellent way to let our customers and friends learn a little bit more about us as individuals, while also giving you some ideas about what to cook while you\u0026rsquo;re home. Fifteen of our co-workers (nearly half of the whole company!) contributed recipes that span a wide range of cuisines and complexity levels. Some of them are pretty involved and may not be practical if you can\u0026rsquo;t easily get to a well-stocked grocery store, while others have ingredients that are likely to already be in your pantry.\nThis is meant to be a fun, lighthearted distraction from the worldwide tragedy going on around us, so we hope you will take it in that spirit. And we would love for you to share your own recipes with us on Facebook or Twitter! We\u0026rsquo;re all in this together, and we will get through it.\nAll the best to our fans and friends. 
Be well!\n","permalink":"/blogs/iscs-global-pandemic-cookbook/","tags":["cookbook","pandemic","ISC"],"title":"ISC’s Global Pandemic Cookbook"},{"categories":["Kea","Open Source","Webinars","Events","Stork"],"contents":"See the new open source management application for Kea DHCP. Stork provides a simple graphical interface for monitoring status, activity, utilization and errors for one or many Kea servers.\nWe will give an overview of the application objectives, components and design, and then dive into a live demonstration of the current development version. We are hoping for a lively session with lots of questions, comments and discussion about what you would like to see in this new tool\nThe slides and a recording of the webinar will be available on our Presentations page once the webinar is complete.\nWe invite all Kea administrators and any other interested users to register for the May 6th webinar at 17:00 UTC/10:00 PDT, at https://us02web.zoom.us/webinar/register/WN_-B7kN5leTdqCPSjtAr96IQ.\n","permalink":"/blogs/kea-management-webinar-may-2020/","tags":["Kea","Stork","Webinars"],"title":"Stork Management Dashboard for Kea - May 6, 2020"},{"categories":null,"contents":"BIND 9 is striving for strict compliance with IETF standards. We believe current versions of BIND 9 comply with the following RFCs, with the caveats and exceptions listed in the numbered notes below. Note that a number of these RFCs do not have the status of Internet standards but are proposed or draft standards, experimental RFCs, or Best Current Practice (BCP) documents. 
The list is non-exhaustive.\nRFC1034 RFC1035 [1] [2] RFC1101 RFC1123 RFC1183 RFC1521 [16] RFC1535 RFC1536 RFC1706 RFC1712 RFC1750 RFC1876 A Means for Expressing Location Information in the Domain Name System\nRFC1982 RFC1995 RFC1996 A Mechanism for Prompt Notification of Zone Changes (DNS NOTIFY)\nRFC2136 Dynamic Updates in the Domain Name System (DNS UPDATE)\nRFC2163 RFC2181 RFC2230 RFC2308 Negative Caching of DNS Queries (DNS NCACHE)\nRFC2539 RFC2606 [17] RFC2782 RFC2845 Secret Key Transaction Authentication for DNS (TSIG)\nRFC2874 [18] RFC2915 RFC2930 RFC2931 [5] RFC3007 RFC3110 RFC3123 RFC3225 RFC3226 RFC3363 [6] RFC3490 [7] RFC3491 (Obsoleted by 5890, 5891) [7] RFC3493 RFC3496 RFC3597 RFC3645 RFC4025 RFC4033 DNS Security Introduction and Requirements [18]\nRFC4034 Resource Records for the DNS Security Extensions\nRFC4035 Protocol Modifications for the DNS Security Extensions\nRFC4074 Common Misbehavior Against DNS Queries for IPv6 Addresses\nRFC4255 RFC4294 - Section 5.1 [8] RFC4343 RFC4398 RFC4408 RFC4431 RFC4470 [9] RFC4509 RFC4592 RFC4635 RFC4701 RFC4892 Requirements for a Mechanism Identifying a Name Server Instance\nRFC4955 [10] RFC5001 DNS Name Server Identifier (NSID) Option\nRFC5011 RFC5155 RFC5205 RFC5452 [11] RFC5702 RFC5936 RFC5952 RFC5966 DNS Transport over TCP - Implementation Requirements\nRFC6052 RFC6147 [12] RFC6303 Locally Served DNS Zones\nRFC6604 RFC6605 [13] RFC6672 RFC6698 RFC6742 RFC6725 [19] RFC6840 [14] RFC6844 RFC6891 Extension Mechanisms for DNS (EDNS(0))\nRFC6944 RFC7043 RFC7314 Extension Mechanisms for DNS (EDNS) EXPIRE Option\nRFC7344 [20] RFC7477 RFC7553 RFC7793 Adding 100.64.0.0/10 Prefixes to the IPv4 Locally-Served DNS Zones Registry\nRFC7830 [15] RFC7929 RFC8080\nNo longer supported\nRFC2536\nThe following DNS-related RFCs have been obsoleted:\nRFC2535 (Obsoleted by 4034, 4035) [3] [4] RFC2537 (Obsoleted by 3110) [19] RFC2538 (Obsoleted by 4398) RFC2671 (Obsoleted by 6891) RFC2672 (Obsoleted by 6672) RFC2673 (Obsoleted by 6891) RFC3008 (Obsoleted by 4034, 4035) RFC3152 (Obsoleted by 3596) RFC3445 (Obsoleted by 4034, 4035) RFC3655 (Obsoleted by 4034, 4035) RFC3658 (Obsoleted by 4034, 4035) RFC3755 (Obsoleted by 4034, 4035) RFC3757 (Obsoleted by 4034, 4035) RFC3845 (Obsoleted by 4034, 4035)\n[1] Queries to zones that have failed to load return SERVFAIL rather than a non-authoritative response. This is considered a feature.\n[2] CLASS ANY queries are not supported. This is considered a feature.\n[3] Wildcard records are not supported in DNSSEC secure zones.\n[4] Servers authoritative for secure zones being resolved by BIND 9 must support EDNS0 (RFC2671), and must return all relevant SIGs and NXTs in responses rather than relying on the resolving server to perform separate queries for missing SIGs and NXTs.\n[5] When receiving a query signed with a SIG(0), the server will only be able to verify the signature if it has the key in its local authoritative data; it will not do recursion or validation to retrieve unknown keys.\n[6] Section 4 is ignored.\n[7] Requires \u0026ndash;with-idn to enable entry of IDN labels within dig, host, and nslookup at compile time. ACE labels are supported everywhere with or without \u0026ndash;with-idn.\n[8] Section 5.1 - DNAME records are fully supported.\n[9] Minimally Covering NSEC Records are accepted but not generated.\n[10] Will interoperate with correctly designed experiments.\n[11] Named only uses ports to extend the ID space; addresses are not used.\n[12] Section 5.5 does not match reality. Named uses the presence of DO=1 to detect if validation may be occurring. 
CD has no bearing on whether validation is occurring or not.\n[13] Conditional on the linked OpenSSL library supporting ECDSA.\n[14] Section 5.9 - Always set CD=1 on queries. This is not done, as it prevents DNSSEC from working correctly through another recursive server.\nWhen talking to a recursive server, the best approach is to send CD=0, and then send CD=1 only if SERVFAIL is returned, in case the recursive server has a bad clock and/or a bad trust anchor. Alternatively, one can send CD=1 and then CD=0 on validation failure, in case the recursive server is under attack or there is stale/bogus authoritative data.\n[15] Named doesn\u0026rsquo;t currently encrypt DNS requests, so the PAD option is accepted but not returned in responses.\n[16] Only the Base 64 encoding specification.\n[17] Not applicable to DNS server implementations.\n[18] Loading and serving of A6 records only. A6 records were moved to the experimental category by RFC3363.\n[19] RSAMD5 support has been removed. See RFC 6944.\n[20] Updating of parent zones is not yet implemented.\n","permalink":"/bind9rfc/","tags":null,"title":"BIND 9 RFC Compliance"},{"categories":["BIND","Security","Webinars","Events"],"contents":"Domain hijacking is becoming increasingly prevalent in the ever-evolving threat landscape. Since the start of this year, Spamhaus has observed an average of 100 hijacked domains a day at a single domain registrar.\nLegitimate domains with a positive reputation are a resource highly sought after by bad actors, and it could be your domain they hijack. From DNS hijacking to sending bogus emails to gain access to an owner\u0026rsquo;s registry account, innocent users are regularly being taken advantage of. The fallout from a hijacked domain is significant, with a potentially long resolution period.\nJoin ISC and Spamhaus to discover how big an issue domain hijacking has become. 
We’ll be sharing real-life examples, and providing advice to help users and networks mitigate, detect, and remediate domain hijacking.\nThe slides and a recording of the webinar will be available on our Presentations page once the webinar is complete.\nWe invite all DNS administrators and any other interested users to register for the April 29th webinar at 17:00 UTC/13:00 EDT/10:00 PDT at https://zoom.us/webinar/register/WN_DAn5AH6vR1yODNDyddUBog.\n","permalink":"/blogs/spamhaus-domain-hijacking-april-2020/","tags":["BIND","Security","Webinars"],"title":"Domain Hijacking - Problem and Protection Webinar - April 29, 2020"},{"categories":["Kea","Open Source","Webinars","Events","Performance"],"contents":"Learn how ISC has been benchmarking the Kea DHCP server as we work on performance improvements.\nWe will describe our test bed, and the open source tools we are using to generate DHCP traffic. We will do our best to explain how this simulated traffic might differ from real network traffic. We will share the results of an ongoing test to measure the impact of multi-threading improvements on Kea performance. You will also hear our latest estimates for the performance differential between using a local CSV file for leases versus a database backend. 
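For context, the two lease-storage options being compared correspond to the lease-database clause in the Kea configuration. A sketch of the two variants, expressed here as Python dicts purely for illustration (paths, hostnames, and credentials are placeholders):

```python
import json

# The memfile backend keeps leases in a local CSV file
# (the path below is a placeholder, not a recommendation):
memfile_backend = {
    "lease-database": {
        "type": "memfile",
        "persist": True,
        "name": "/var/lib/kea/kea-leases4.csv",
    }
}

# The database alternative points at an external server instead
# (host and credentials are placeholders):
mysql_backend = {
    "lease-database": {
        "type": "mysql",
        "name": "kea",
        "host": "db.example.com",
        "user": "kea",
        "password": "secret",
    }
}

snippet = json.dumps(memfile_backend, indent=2)
```

The performance question in the webinar is essentially the cost of a local file append versus a network round trip to the database for each lease update.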
Participants will be able to ask questions.\nThe test plan is described and images of the major results are documented in our Knowledgebase: https://kb.isc.org/docs/kea-performance-tests-17-multithreading .\nHTML images, that enable you to see more detailed data by clicking and zooming in, are located at https://www.isc.org/kea-performance/ .\nThe slides and a recording of the webinar will be available on our Presentations page once the webinar is complete.\nWe invite all Kea administrators and any other interested users to register for the April 22nd webinar at 17:00 UTC/10:00 PDT, at https://zoom.us/webinar/register/WN_Dls2zWlZT6ys7gllLwb_8g.\n","permalink":"/blogs/kea-performance-webinar-april-2020/","tags":["Kea","Performance","Webinars"],"title":"Kea Performance Testing and the Impact of Multi-Threading Webinar - April 22, 2020"},{"categories":["BIND","Performance"],"contents":"Now that the latest version of BIND 9 has been released, it is worthwhile to look at the performance enhancements that have taken place over the past couple of years.\nAs part of the development cycle of BIND, we carry out a set of performance tests. Our performance laboratory system (somewhat unimaginatively called \u0026ldquo;perflab\u0026rdquo;) is continually cycling through all configurations registered with it, building BIND and running a performance test. (Details of the system have been presented elsewhere (e.g. https://www.isc.org/docs/bellis-oarc-perflab.pdf) so it will not be described here.) In this way, as the development branches are updated with changes, the effect of those changes on performance is quickly visible.\nAmong the configurations tested, the following are especially important:\nRoot zone: As a number of the twelve root server operators run BIND on at least some of their servers, and given the crucial importance of the root zone to the global DNS infrastructure, we test the performance of BIND when serving a copy of the root zone. 
One thousand zones/one million zones: These tests simulate the environment of a web hosting provider, whose server might be serving zones for a number of customers. Each of the zones in the test is very small, comprising just an SOA, NS, A, and AAAA record. One million delegations: This configuration simulates a large top- (or second-level) domain, where the zone comprises almost exclusively delegations to the zones of subdomain owners. One million resource records: The opposite extreme to the one million delegation test. In that test, BIND returns referrals in response to queries; in this test BIND returns resource records from within the zone. Recursive: with the aid of a third machine acting as the authoritative server, this test checks how BIND responds when acting as a recursive server. As noted below, the figures obtained are for measurements of queries for names in the cache. 2021 Update: We are working on a new approach to testing recursive performance that should be a better reflection of real-world resolver performance, using captured real queries. We are not confident that the results shown below for resolver performance are predictive of what you will see in a live deployment.\nFor each scenario, the query set is one that will result in about 5% NXDOMAIN responses (with the exception of the root zone test, where the query set was taken from a real traffic stream, and contains about 2.8% NXDOMAINs).\nTo obtain a figure for the query rate that BIND can handle, perflab starts up the server under test. It then fires queries at it for 30 seconds and calculates a query rate. This 30-second measurement is repeated 30 times and a figure for the query rate (and standard deviation) is calculated. The first measurement is discarded from calculations. For recursive measurements, this allows the cache to be filled and so subsequent tests are measuring the same thing - the time taken to get a name from cache. 
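The measurement protocol boils down to simple statistics: repeat the 30-second run 30 times, drop the first (warm-up) sample, and report the mean query rate with its standard deviation. A sketch with made-up sample data:

```python
import statistics

def summarize_qps(samples):
    """Mean and standard deviation of query-rate samples,
    discarding the first (warm-up) measurement, during which
    caches and data structures are still being populated."""
    steady = samples[1:]
    return statistics.mean(steady), statistics.stdev(steady)

# Made-up per-run queries-per-second figures; note the low first run:
runs = [61000, 100000, 101000, 99000, 100000]
mean_qps, sd_qps = summarize_qps(runs)
```

Discarding the warm-up run matters most for the recursive tests, where the first pass is dominated by cache misses rather than steady-state performance.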
But for all tests, it also allows the server to settle down - for all data structures to be created and reach their final size.\nThe series of measurements presented here is taken from those of current development branches, at a point a few days after the current set of releases was generated. Also included are the results for the 9.14 series, which (at the time of writing) has just reached its end of life; this is no longer actively tested, so the results here are from the last measurements taken, after the release of 9.14.11.\nThe graphs clearly show the improvement in performance from 9.11 through 9.14 to 9.16. Significant investment in performance improvements began in work for 9.12, where the removal of the old \u0026ldquo;additional cache\u0026rdquo; and the introduction of a new \u0026ldquo;glue cache\u0026rdquo; vastly improved performance in delegation-centric zones. Since then, a lot of work has been put into the refactoring of the network code. BIND was written in the days of slower machines and slower networks, and the model that suited it well then was in need of updating. So 9.16 introduced the updated network manager, which, as well as improving performance, has made that area of the code more maintainable.\nSince the release of 9.16, there have been further internal changes that are still being assessed; the most recent changes appear to have led to a small decrease in performance of the main branch (from which the 9.17 releases are taken). This will be the subject of investigation over the months to come.\n","permalink":"/blogs/bind-performance-march-2020/","tags":["BIND","performance"],"title":"Comparative Performance Results of BIND Versions - March 2020"},{"categories":["ISC"],"contents":"Tips on Remote Working At ISC we have been working remotely for years. 
We have staff in Australia, Poland, the UK, France, the Czech Republic, Bulgaria, Brazil, Denmark, Austria, the Netherlands, Alaska, California, Delaware, New Hampshire, North Carolina, Virginia, and Washington State.\nWe have a few key tools and practices that make this work for us. Some of these may be useful for others who are working remotely for the first time to slow the spread of COVID-19.\nVideo conferencing. It is so much better than audio-only. We use Zoom. We wait until everyone expected is present, and that is easier to see with faces on screen. I think people pay more attention to the discussion when their faces are visible too. There will always be a few people who don\u0026rsquo;t want to use video, but as long as most people use it, you get the benefit. While we are waiting for the laggards to join us, we do what people usually do when waiting in a conference room - remark on each other\u0026rsquo;s hair, the weather, etc.\nWe use Doodle polls to determine available meeting times. We have a lot of time zones to consider. It is impossible to have meetings that span this many time zones without inconveniencing some people. So, we use Doodle polls to determine not only which times are convenient, but which other times are possible. Also, as individuals, we appreciate the flexibility of working from home, so many of us do participate in meetings late in the evening and early in the morning. A few of our meetings alternate times every week so everyone can be included at least half of the time.\nMeeting agenda. We use Etherpad. We prepare an agenda ahead of time for every meeting. Typically whoever runs the meeting prepares the agenda. This is not a burden because we start by pasting in the contents of the Etherpad from the prior meeting, and then updating the issues and status.\nEveryone signs in on the pad at the start of the meeting. That gets them participating, ensures that they have the pad up, and makes a record of who attended.\nMeeting Notes. 
We take notes during the meeting on the Etherpad. Anyone can make comments or notes. This particularly helps people like me, who are habitual interrupters. If I can’t restrain myself from making some comment, I can type it onto the pad, and avoid interrupting the speaker or the flow of conversation. Then when we get to that issue on the agenda, my comments can be discussed. We record decisions and action items on the pad. We never delete old Etherpads, and because we name them consistently with the meeting name followed by the date we can easily find the old Etherpads if we need to recall a decision or action item. This system isn’t perfect, because recording an action item on an Etherpad does not automatically create a task in any reminder system, but it works reasonably well for us.\nWe have an internal-only chat system. We use Mattermost. We migrated to Mattermost from Jabber because Mattermost preserves all the prior chat history, even if you were not logged in at the time. If you log on, say 6 hours after a conversation has concluded, you can still read the discussion. Considering that our staff is located in a lot of different time zones, it can feel like you ‘missed out’ on a conversation you wanted to be in, but at least you don’t have to be logged in all the time. Many of us are monitoring Mattermost all the time. This can be a distraction, and even addicting, but you can just shut the window or even log out if you need to concentrate on something, and the conversation will be there when you return. We have regular meeting Zoom links and Etherpad names pinned in each channel so they are easily found in a hurry.\nWe have separate channels for separate projects, and have a few channels for non-work-related stuff, like sharing photos. (Another reason for migrating to Mattermost was, you can share pictures.) 
I understand that \u0026lsquo;watercooler chat\u0026rsquo; has become something of a legal hot potato in some organizations, because what were previously casual asides made in the hallway are now written down, and if some of them are insensitive or offensive, they can be more so when preserved and widely shared. It may be helpful to have an internal code of conduct, and assigned moderators for chat channels to ensure this doesn’t get out of hand.\nWe have a single weekly News email to all staff summarizing the main news in each department. This way, everyone can feel up to date without having to see a bulletin board. This is not particularly specific to a remote work environment, but it is another thing that helps when people are not co-located.\nHow do you know people are working, if you can\u0026rsquo;t see them in the office? Sigh. I know some of you have much more restrictive work environments, but at ISC we don’t sweat about whether everyone is working enough hours. (We have a bigger problem with people not resting or taking time off, actually.) We have a system for recording our hours every week, but nobody is monitoring to see if you are online at any given time. I know some employers monitor chat as a proxy for determining if remote workers are on-line. That is … demeaning and insulting and I feel sad for people who have to work in that kind of environment. We do have on-call rota, of course, because we are providing 7x24 SLA-based support, but that is the exception. I know trusting that people are actually working can be a management hurdle for implementing remote working, but it really is not a problem if you focus on results rather than activity.\nDon\u0026rsquo;t you miss meeting in person? We do get together in person at least once or twice a year. We have an annual All-Hands meeting, and we meet up in smaller groups at conferences or project summits. We do actually share a lot of personal information in our group chat channels. 
People post pictures of food, kids and grandkids, pets, eagles, skiing and boat repairs. It is probably hard on new employees to have to come up to speed remotely but we generally hire people with a lot of experience, so that hasn\u0026rsquo;t been a big problem at ISC. You do have to make your own coffee, and snacking constantly while working can be a real health issue. However, remote working can be very family-friendly. Many of us are interleaving child care responsibilities with work time. Personally, I love not having to commute hours every day in heavy traffic. I can spread my working hours out from 7 AM to midnight, with breaks when I need them to run errands or take a walk. ISC\u0026rsquo;s remote work system works well for us, and it probably helps us recruit and retain some very effective staff. We are saving a few carbon emissions, not passing along viruses and getting work done. It might work for you.\n","permalink":"/blogs/remote_working_at_isc/","tags":["ISC"],"title":"Remote Working at ISC"},{"categories":["BIND"],"contents":"BIND 9.16.0 has been issued, beginning a new stable branch. New releases of BIND are available for download from our downloads page.\nBIND 9.11.16 and 9.14.11 are maintenance release versions of the existing 9.11 (ESV) and 9.14 release branches and contain the usual assortment of bug fixes and minor feature improvements. We plan to continue maintaining 9.11, which is our current Extended Support Version, through 2020, with security fixes through 2021. 9.16 replaces 9.14 as the new stable version. We had planned to EOL 9.14 at the end of Q1 2020, but we will extend that enough to allow a 3-month transition period for our users. During this time we will backport and issue security fixes as required. If you are running 9.14.x, we recommend you begin planning your migration to 9.16 now, and complete it within about 3 months. 
To review our published release plan, see this Knowledgebase article.\nThe bigger news in today\u0026rsquo;s set of releases is BIND 9.16.0: the code refactoring, new features, and performance work which were done for the 9.15 experimental branch are now considered complete and stable enough to be moved to a production branch of BIND, capping major changes to that work. Evan Hunt, one of the Senior Developers on the BIND 9 project, spoke about the refactoring effort at a recent conference. BIND 9.16 will be an especially long-lived version. Our plan is to declare 9.16 as our next Extended Support Version, after it has been in the field for a while.\nSignificant changes in the 9.16 branch include:\nThe new Key and Signing Policy (KASP) feature provides simplified DNSSEC key and signing management using policies defined by the \u0026ldquo;dnssec-policy\u0026rdquo; statement. We hope this tool will significantly facilitate ongoing key and signing maintenance. Use of this tool will be covered in our 10-webinar series on DNSSEC with BIND 9.\nBIND\u0026rsquo;s networking system has been substantially reworked. This does not have user-visible impact yet, but subsequent releases and features will benefit from this change. In particular, we expect to realize significant performance improvements in the 9.16 branch.\nThe way that DNSSEC trust anchors are managed has been improved. Please note the updated CLI for this.\nDLV (DNSSEC Lookaside Validation) has been deprecated since BIND 9.12. dlv.isc.org (the main service used by those previously relying on DLV, operated by ISC) was turned off in 2017. Support for DLV has now been removed from BIND 9 completely, shortening BIND\u0026rsquo;s validator by hundreds of lines of code and greatly reducing its complexity.
Removing this feature required a multi-year process of notification, working with partners, and gradual deprecation to avoid disruption in the DNS.\nRead more about these new editions of BIND in their release notes:\n9.11.16\n9.14.11\n9.16.0\n","permalink":"/blogs/bind9.16.0_released/","tags":["BIND"],"title":"BIND 9.16.0, Stable Branch for 2020 and Beyond"},{"categories":["ISC","Company News"],"contents":"To let our community learn a little more about some of the people behind the software you use every day, we are presenting short profiles of some of our engineers and other staff members. We hope you find them as interesting as we do!\nOur Newest ISC Support Engineer Peter Davies, a Welshman who lives in Denmark, has 20+ years of experience in network applications administration. Peter\u0026rsquo;s story of how he got involved in DNS may sound familiar to some of you: \u0026ldquo;I was working as a network tech at SAS and one day the boss threw the \u0026lsquo;DNS \u0026amp; BIND\u0026rsquo; book from O\u0026rsquo;Reilly at me and said, \u0026lsquo;You\u0026rsquo;re our DNS admin. You start Monday; read the book.\u0026rsquo; \u0026quot;\nPeter has been in networking since the days when there were other protocols besides TCP/IP - like XNS, DECnet, SNA, and NetWare. He started with BIND 4.8.3 on Ultrix! Peter has worked for The Royal Bank of Scotland, SAS, CSC, Ikea, and Copenhagen Business School. He was quite excited to get a job at ISC: \u0026ldquo;Well, I never thought I would have a chance. Back in the day I used to think, \u0026lsquo;Wow. Think about working in a place like ISC!\u0026rsquo; \u0026quot;\nPeter is a native English speaker, but he has also picked up Welsh and \u0026ldquo;Scandinavian\u0026rdquo; (a mashup of Danish, Swedish, and Norwegian). He and his family live just outside of Copenhagen, where he reports it is calm and quiet.
We know you will enjoy working with Peter and he will do whatever he can to assist ISC\u0026rsquo;s customers.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. We welcome your feedback at marketing@isc.org and look forward to bringing you more employee profiles in the future.\n","permalink":"/blogs/isc-profile-davies/","tags":["Open Source","Profiles"],"title":"Meet an ISC Engineer - Peter Davies!"},{"categories":["BIND"],"contents":"We have recently identified several SELinux-specific issues with our BIND RPM packages. This (rather technical) blog post describes the issues we found, how we got down to the bottom of those issues, and the decisions we made based on our findings.\nRPMs, SCLs, LSMs\u0026hellip; For a while now, ISC has been publishing BIND RPM packages for supported versions of Red Hat Enterprise Linux and CentOS. Since these operating systems come with a default SELinux policy which relates to the named process, from the outset we wanted our packages to comply with that policy. Every SELinux policy contains rules which assign specific SELinux contexts to various files, directories, processes, sockets, etc. These contexts, which can be thought of as labels assigned to different entities, are then used for defining allowed interactions. Since SELinux rules defining contexts for files and directories are path-based and we package BIND in the form of Software Collections, which (among other things) means that our binaries are installed in different locations than those used by stock RHEL/CentOS packages, the binaries we ship in BIND SCLs are not automatically assigned proper SELinux contexts. Fortunately, it is possible to set up so-called file context equivalency rules, which allow SELinux rules specified for a certain part of the filesystem to be reused for a different part of the filesystem. 
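For illustration, such an equivalency rule can be created with the standard semanage tool; the following is a minimal sketch (the exact paths and commands are assumptions based on the package layout described here, not the literal scriptlet contents):

```shell
# Sketch: declare that paths under the SCL root should be labeled as if the
# /opt/isc/isc-bind/root prefix were stripped, so e.g. the named binary
# under the SCL root inherits the named_exec_t context.
semanage fcontext -a -e / /opt/isc/isc-bind/root

# Re-apply labels so already-installed files pick up the equivalent contexts.
restorecon -R /opt/isc/isc-bind/root
```

With an equivalency in place, no per-file rules need to be duplicated; the stock policy is reused wholesale for the relocated tree.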
That is what we do in the post-installation scriptlet of the isc-bind metapackage in order to apply the SELinux rules specified in the stock SELinux policy to the files and directories installed by our SCLs.\nThe BIND 9.15.6 Mystery Until recently, this worked like a charm. However, when we released BIND 9.15.6, it turned out that on CentOS 6, running /etc/init.d/isc-bind-named start no longer allowed named to be started\u0026hellip; unless SELinux policy enforcement was disabled (e.g. using setenforce 0). At the same time, everything was working fine on CentOS 7 and CentOS 8. This was something we had not seen before, so we had to find out what was happening.\nWhen SELinux is a suspected culprit of something not working, the first step is to\u0026ndash; no, not to disable it! The first step is to look at the contents of /var/log/audit/audit.log to see what is not working. In the case of BIND 9.15.6 on CentOS 6, the following AVC denial was telling:\n497 fsgid=497 tty=(none) ses=2 comm=\u0026#34;isc-worker0000\u0026#34; exe=\u0026#34;/opt/isc/isc-bind/root/usr/sbin/named\u0026#34; subj=unconfined_u:system_r:named_t:s0 key=(null) type=AVC msg=audit(1577961466.346:33): avc: denied { write } for pid=1831 comm=\u0026#34;isc-worker0000\u0026#34; path=\u0026#34;[eventfd]\u0026#34; dev=anon_inodefs ino=3853 scontext=unconfined_u:system_r:named_t:s0 tcontext=system_u:object_r:anon_inodefs_t:s0 tclass=file The key part of the above message was [eventfd] because it immediately rang a bell: the big change in BIND 9.15.6 was the introduction of the new network manager which adds a dependency on libuv and uses the latter instead of custom networking code. The purpose of this move was to simplify the source code while retaining portability and also to allow us to more conveniently implement new networking-related features, like DNS-over-HTTPS support. 
Anyway, on Linux, libuv\u0026rsquo;s event loop uses eventfds internally, so it is not surprising that named tried to use an eventfd upon startup. But why was that attempt blocked by SELinux only on CentOS 6?\nFirst, we verified that named_t processes are indeed not allowed to write to anon_inodefs_t files:\n# sesearch --allow --source named_t --target anon_inodefs_t Found 2 semantic av rules: allow named_t file_type : filesystem getattr ; allow named_t filesystem_type : filesystem getattr ; Since SELinux works on a deny-by-default basis and this denial is not triggered on CentOS 7 and later, our first guess was that something in the default SELinux policy changed between CentOS 6 and 7. Thus, we ran the same command on CentOS 7, but the result was surprising:\n# sesearch --allow --source named_t --target anon_inodefs_t Found 6 semantic av rules: allow system_bus_type filesystem_type : dir { getattr search open } ; allow named_t file_type : filesystem getattr ; allow named_t filesystem_type : filesystem getattr ; allow domain file_type : file map ; allow domain file_type : chr_file map ; allow domain file_type : blk_file map ; While there are indeed some differences in this part of the SELinux policy, none of the above rules allows named to write to an eventfd. Yet, everything was working as intended. So what was making it possible? Since SELinux policy enforcement happens in the kernel, another theory we had was that some kernel change was responsible for \u0026ldquo;fixing\u0026rdquo; the issue on CentOS 7+. To minimize the time needed to verify that theory, we used ELRepo to install kernel 4.4 on CentOS 6 while still using the stock SELinux policy shipped with that operating system. 
It turned out that rebooting with kernel 4.4 in use enabled /etc/init.d/isc-bind-named start to work again on CentOS 6 with SELinux in enforcing mode.\nSince 4.4 is much newer than 3.10 (the default CentOS 7 kernel), we could not be 100% sure that a change in kernel code was the root cause of our named issue being magically fixed in CentOS 7. Curiosity pushed us forward and thus we performed a git bisect session on the kernel source tree, trying to figure out which specific change in the kernel code was the culprit for the phenomenon we were observing. It turned out it was commit 3836a03d978e68b0ae00d3589089343c998cd4ff, a one-line change merged into Linux 2.6.33 (just one minor version later than 2.6.32, on which the CentOS 6 kernel is based):\ndiff --git a/fs/anon_inodes.c b/fs/anon_inodes.c index 9f0bf13291e5..2de009565d8e 100644 --- a/fs/anon_inodes.c +++ b/fs/anon_inodes.c @@ -209,6 +209,7 @@ static struct inode *anon_inode_mkinode(void) inode-\u0026gt;i_mode = S_IRUSR | S_IWUSR; inode-\u0026gt;i_uid = current_fsuid(); inode-\u0026gt;i_gid = current_fsgid(); +\tinode-\u0026gt;i_flags |= S_PRIVATE; inode-\u0026gt;i_atime = inode-\u0026gt;i_mtime = inode-\u0026gt;i_ctime = CURRENT_TIME; return inode; } The commit log message does a good job of explaining why this change was introduced:\nInotify was switched to use anon_inode instead of its own private filesystem which only had one inode in commit c44dcc56d2b5c7 \u0026ldquo;switch inotify_user to anon_inode\u0026rdquo;\nThe problem with this is that now the inotify inode is not a distinct inode which can be managed by LSMs. userspace tools which use inotify were allowed to use the inotify inode but may not have had permission to do read/write type operations on the anon_inode. After looking at the anon_inode and its users it looks like the best solution is to just mark the anon_inode as S_PRIVATE so the security system will ignore it.\nIt all made sense now. 
Previously, the SELinux policy allowed specific tools to access the inotify inode, which was a distinct inode type. After the inotify code had been updated to use a more \u0026ldquo;common\u0026rdquo; inode type, it was decided that instead of updating existing SELinux policies so that they matched this code change, the kernel should be modified to treat inodes of type anon_inode as \u0026ldquo;transparent\u0026rdquo; to Linux Security Modules like SELinux. In other words, no SELinux policy rule was required anymore to allow a confined process to access an anon_inode inode (like an eventfd). This is why BIND 9.15.6 is allowed to start on CentOS 7+, but not on CentOS 6.\nHow to Make BIND 9.15.6+ Work With SELinux on CentOS 6 Since CentOS 6 will reach end-of-life in less than a year, we decided not to add any workaround for this issue to our RPM packages. We believe that the number of users who would like to use a very recent version of BIND on an operating system released almost nine years ago and requiring SELinux in enforcing mode is very low. If you are such a user, you have a few options:\ninstall a custom SELinux policy module (see below), update to a more recent kernel, rebuild stock CentOS 6 kernel with commit 3836a03d978e68b0ae00d3589089343c998cd4ff reverted. 
To build a custom SELinux policy module containing the change required for BIND 9.15.6+ to run on a stock CentOS 6 kernel, start with putting the following contents in a file called isc-bind-named-centos6.te:\nmodule isc-bind-named-centos6 1.0; require { type anon_inodefs_t; type named_t; class file { read write }; } allow named_t anon_inodefs_t : file { read write }; Then build and install the module by running the following commands:\ncheckmodule -M -m -o isc-bind-named-centos6.mod isc-bind-named-centos6.te semodule_package -o isc-bind-named-centos6.pp -m isc-bind-named-centos6.mod semodule -i isc-bind-named-centos6.pp To verify that the module has been successfully installed, run:\n# semodule -l | grep isc-bind-named-centos6 isc-bind-named-centos6\t1.0 You should now be able to start named using /etc/init.d/isc-bind-named start on a stock CentOS 6 kernel with SELinux in enforcing mode.\nThe \u0026ldquo;SCL + systemd + SELinux\u0026rdquo; Pitfall While investigating the issue described above, we made another unsettling discovery. Here is what we observed on CentOS 7 after running systemctl start isc-bind-named:\n# ps -eZ | grep named system_u:system_r:unconfined_service_t:s0 4966 ? 00:00:00 named Compare this with CentOS 6 output after running /etc/init.d/isc-bind-named start:\n# ps -eZ | grep named unconfined_u:system_r:named_t:s0 2371 ? 00:00:00 named But we did everything according to the SELinux-related sections of the Software Collection Packaging Guide! So what was happening? This issue was caused by the way SCL-provided service binaries are started. When a user needs to start an SCL-provided application directly, they should prefix the command line with scl enable \u0026lt;scl-name\u0026gt; --. This was what we used in our systemd unit file for named and things seemed to work fine. 
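For illustration, the relevant part of such a unit file looks roughly like this (the SCL name and command-line flags are assumptions, not our exact unit file):

```
# Sketch of a systemd unit file fragment for an SCL-packaged named.
[Service]
ExecStart=/usr/bin/scl enable isc-bind -- /opt/isc/isc-bind/root/usr/sbin/named -u named
```
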
However, we never checked whether the named process transitions to the expected SELinux domain (named_t).\nThe default RHEL/CentOS SELinux policy includes a set of so-called domain transition rules which specify which file contexts are entrypoints for which SELinux domains. The list of such rules for the named_exec_t file context (which is what the named binary is labeled with) that are present in the default CentOS 6 policy is:\n# sesearch --type --target named_exec_t Found 10 semantic te rules: type_transition system_dbusd_t named_exec_t : process named_t; type_transition glusterd_t named_exec_t : process named_t; type_transition cluster_t named_exec_t : process named_t; type_transition condor_startd_t named_exec_t : process named_t; type_transition initrc_t named_exec_t : process named_t; type_transition cobblerd_t named_exec_t : process named_t; type_transition NetworkManager_t named_exec_t : process named_t; type_transition openshift_initrc_t named_exec_t : process named_t; type_transition piranha_pulse_t named_exec_t : process named_t; type_transition init_t named_exec_t : process named_t; Let\u0026rsquo;s look at their CentOS 7 counterparts:\n# sesearch --type --target named_exec_t Found 15 semantic te rules: type_transition system_dbusd_t named_exec_t : process named_t; type_transition kdumpctl_t named_exec_t : process named_t; type_transition system_cronjob_t named_exec_t : process named_t; type_transition ipsec_mgmt_t named_exec_t : process named_t; type_transition crond_t named_exec_t : process named_t; type_transition initrc_t named_exec_t : process named_t; type_transition init_t named_exec_t : process named_t; type_transition condor_startd_t named_exec_t : process named_t; type_transition NetworkManager_t named_exec_t : process named_t; type_transition cluster_t named_exec_t : process named_t; type_transition piranha_pulse_t named_exec_t : process named_t; type_transition glusterd_t named_exec_t : process named_t; type_transition cobblerd_t 
named_exec_t : process named_t; type_transition dnssec_trigger_t named_exec_t : process named_t; type_transition openshift_initrc_t named_exec_t : process named_t; Even though these look similar to CentOS 6 rules, the SELinux aspect of starting services on CentOS 7 (with systemd) differs a lot from CentOS 6 (with its SysVinit scripts). When systemctl start \u0026lt;service\u0026gt; is run, the systemd process (PID 1) is told to spawn a process by running the command provided in the relevant unit file. PID 1 runs in SELinux domain init_t:\n# ps -eZ | grep systemd$ system_u:system_r:init_t:s0 1 ? 00:00:01 systemd Thus, since this rule is present in the default SELinux policy:\ntype_transition init_t named_exec_t : process named_t; when a process running in the init_t domain executes a binary labeled with the named_exec_t context, it should transition to the named_t domain, right?\nWell, yes - but the catch is that when the /usr/bin/scl enable ... construct is used in a systemd unit file, the binary which PID 1 executes is /usr/bin/scl - and that binary is labeled with:\n# ls -Z /usr/bin/scl -rwxr-xr-x. root root system_u:object_r:bin_t:s0 /usr/bin/scl As you can see above, there is no transition rule which specifies bin_t as an entrypoint for the named_t domain. There is, however, a rule which specifies which domain a process should transition to if it is currently running in the init_t domain and it executes a bin_t binary:\n# sesearch --type --source init_t --target bin_t Found 1 semantic te rules: type_transition init_t bin_t : process unconfined_service_t; Found 1 named file transition filename_trans: type_transition init_t bin_t : dir cupsd_rw_etc_t \u0026#34;inf\u0026#34;; This is why using scl enable ... in a systemd unit file causes the service to be run as an unconfined one. A post from Dan Walsh\u0026rsquo;s Blog explains why this approach was chosen for CentOS 7 onwards.\nThis is different from what happens on CentOS 6.
When init (PID 1), which runs in SELinux domain init_t, executes an init script (most of which - including /etc/init.d/isc-bind-named - are labeled with the initrc_exec_t context), it first transitions to the initrc_t domain:\n# sesearch --type --source init_t --target initrc_exec_t Found 1 semantic te rules: type_transition init_t initrc_exec_t : process initrc_t; When the init script invokes /usr/bin/scl (labeled with bin_t), no domain transition happens:\n# sesearch --type --source initrc_t --target bin_t Thus, /usr/bin/scl still runs in the initrc_t domain, which allows named to transition to the named_t domain when it is executed:\n# sesearch --type --source initrc_t --target named_exec_t Found 1 semantic te rules: type_transition initrc_t named_exec_t : process named_t; This is why the problem is not triggered on CentOS 6.\nFortunately for us, named does not need any special environment to be set up in order to start; the scl enable ... prefix can simply be removed from the ExecStart line of the systemd unit file and the service will still be able to start, even though the shared libraries it is linked against are installed in a non-standard location. This is possible thanks to the use of -rpath in the build process:\n# objdump -x /opt/isc/isc-bind/root/usr/sbin/named | grep RPATH RPATH /opt/isc/isc-bind/root/usr/lib64 Thus, we removed the scl enable ... prefix from the unit file shipped with BIND SCLs and this is no longer an issue with the latest versions of our packages. We also prepared a pull request for the Software Collection Packaging Guide so that the latter briefly mentions the issue at hand. Hopefully this will prevent other packagers from encountering the same pitfall. 
We also added new checks to our CI pipelines to ensure the problem does not silently reoccur.\nThird Time\u0026rsquo;s a Charm Shortly after releasing the next BIND version, 9.15.7, we received a report of a permissions-related issue which we missed in our testing because on CentOS 6 its symptoms were similar to the BIND 9.15.6 eventfd issue discussed above and it was not triggered on CentOS 7 because of the unconfined domain issue also discussed above. What happens is that named attempts to create Unix domain sockets in /tmp for the purpose of passing TCP sockets between threads. While cumbersome, apparently using uv_pipe structures (which are implemented using Unix domain sockets on Unix platforms) is what libuv needs to pass sockets between threads (not processes!) ever since functions allowing a simpler approach were removed. Unfortunately, named connecting to Unix domain sockets causes trouble when certain security mechanisms are in place. To avoid this problem, BIND 9.15.8 uses internal libuv functions in order to pass TCP sockets between threads without employing an IPC channel for that purpose.\nAs the above examples show, SELinux can be tricky to get right, but that does not mean your knee-jerk reaction to the issues it causes should be to disable it. As software evolves, SELinux rules confining it may need to be tweaked. Unfortunately, debugging differences in SELinux behavior between various operating systems may prove challenging due to the number of components involved (kernel code, policy in effect, labels used, etc.). Despite these obstacles, ISC will keep on trying to support SELinux in the BIND packages we publish. 
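As a general debugging aid (generic SELinux tooling, not specific to our packages), a local policy module can be drafted straight from the logged denials; a sketch, with an assumed module name:

```shell
# Sketch: collect recent AVC denials mentioning named and turn them into a
# draft policy module, then install the compiled module.
grep denied /var/log/audit/audit.log | grep named | audit2allow -M local-named
semodule -i local-named.pp
```

Review the generated .te file before installing it, though; audit2allow will happily turn any denial into an allow rule, which may be broader than you actually want.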
If you run into any further problems with our packages, make sure to let us know by opening a GitLab issue.\n","permalink":"/blogs/selinux-struggles-bind/","tags":["BIND","SELinux","RPM","CentOS"],"title":"SELinux Struggles with BIND Startup"},{"categories":null,"contents":"ISC logo images, available for download in JPG and PNG formats.\n","permalink":"/isclogos/","tags":null,"title":"ISC Logos"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"Highlights At ISC, 2019 was a solid year of serving our open source users and the Internet community. We continued our engagement with the Internet infrastructure community through work with ICANN, IETF, DNS-OARC, ISOC, UKNOF, NANOG, APNIC and other fora. We joined with our colleagues in other DNS organizations to drive the first \u0026ldquo;DNS Flag Day\u0026rdquo; to clean up some lingering technical debt in the DNS system.\nWe added two new members to the BIND 9 team, Michal Nowak (based in Czechia) and Diego Fronza (in Brazil). The team completed another year of BIND 9 maintenance and development, modernizing BIND’s DNSSEC support, refactoring the network socket code, and establishing both a code of conduct and a new process for removing old features. A major update to Kea added a configuration backend, and we added monthly development releases while supporting a growing Kea support customer base. Our newest Kea team member, Razvan Becheriu (Romania), was an external contributor on the Cassandra backend until we hired him to work on our next big Kea project, multithreading!\nWe completed the transition to our new GitLab system with the migration of ISC DHCP, so that we now have integrated issue tracking and development in the open for all our projects. We added many more ISC software packages for our users, in a new ISC Cloudsmith.io repository.
We are publishing all of the BIND 9 restricted-access packages (pre-release security fixes and the subscription edition of BIND) and Kea (public plus restricted access) packages there. By building these packages ourselves, we ensure that the latest versions of our applications, and all of their dependencies, are available on the latest stable versions of the major operating systems. We will continue publishing BIND 9 open source packages on the Copr and Launchpad sites, where they are easier for open source BIND 9 users to find.\nOur technical support business is stable and even growing modestly. We added 24 new customers in 2019 and only nine left us, for a total of 122 at the end of the year. We needed an additional support engineer and found a very experienced one, Peter Davies, in Copenhagen.\nContinuing our tools update from 2018, we migrated from WordPress to a new static web site. The change was motivated by our desire to avoid possible security breaches on the website, and also to improve performance. Our first-ever systems administrator for development systems, Emanuel Petr (Czechia), has been a huge help with this. In the process we also tried to remove all Google tracking tags, a privacy improvement we eventually had to reverse as our website virtually disappeared from the Internet.\nWe retired ISC’s long-running Domain Survey, which was becoming irrelevant with the adoption of IPv6, and ended the legacy SNS-PB service. SNS-PB was a free network service that ISC provided for over a decade to non-profits and smaller TLDs. Now that secondary name services are widely available from other providers, we would rather focus our network resources on managing F-Root.\nSpeaking of F-Root, with the retirement of Brian Reid as Director of Network Operations, we reorganized our F-Root and Technical Operations staff.
Ray Bellis took over as acting director, and we added a Network Engineer, Elmar Bins, who works from Austria.\nWe are always looking for opportunities to contribute to open source. We are a long-time user of the open source Etherpad software. Ray Bellis worked on a major refactoring of the code base which formed the bulk of the 1.8.0 release. Ray also published an open source framework for a RADIUS server in Node.js - https://www.isc.org/blogs/radius-framework/. This implements the RADIUS protocol but does not supply the business logic.\nThank you Our greatest thanks go to the loyal open source users who have continued to support ISC, many of whom have been support customers of ours for more than a decade. These include some of the most capable independent operators and enterprises in the world.\nWe also owe a debt to all the wonderful open source users who have submitted feedback and patches, and the open source packagers who maintain distributions for our users. We regard many of these collaborators as our friends.\nIn 2019 we received generous unrestricted donations from Verisign and Craigslist, and Mozilla agreed to underwrite development of the DNS over HTTPS support in BIND 9. A few individuals have also made small financial contributions to ISC, via Patreon or directly.\nWe now have a total of 35 staff members, in 11 countries (the US, Brazil, the UK, France, Austria, Denmark, Poland, Romania, Czechia, Australia, and the Netherlands, if you\u0026rsquo;re keeping score). Including managers, there are eight on the BIND team; seven on the DHCP team; five in Support; six in Sales and Marketing; four in Finance, Facilities, and Exec; and five in F-Root and Technical Operations. We also have two part-time consultants.\nMemorial We sadly note the passing of former ISC employee Kurt Lidl, who worked on BIND from ~2001-2003. Several of us worked with Kurt in other capacities, including at UUNET Technologies, and knew him as a dear friend and co-worker.
He was a FreeBSD user and advocate of many years' standing, and co-authored RFC 2516, which defined PPP over Ethernet (PPPoE). Kurt was smart, funny, and thoughtful, and we will miss him.\nLooking ahead to 2020 Headquarters Move ISC is moving out of our long-time headquarters, an old warehouse at 950 Charter Street in Redwood City, CA. We are working to move out by April 30, 2020. Most of our staff already work remotely, but vacating the HQ means relocating our remaining network facilities, records, and business systems elsewhere. We are primarily a virtual company, with contributors in 11 countries and seven US states, but will be headquartered in the New England region of the US, in an old mill building. The physical address will be 53 Main Street, Newmarket, New Hampshire, and our new mailing address will be PO Box 360, Newmarket, NH 03857.\nProjects We started a new open source project, Stork, which will be a graphical dashboard and management application for BIND 9 DNS and Kea DHCP. We expect to do our first public release in the first half of 2020, but anyone who is very eager can follow the development on our GitLab.\nOur goal with Stork is to provide a more complete solution for DNS and DHCP, enabling more organizations to use the open source software by relieving them of some of the burden of developing or integrating their own management solution. We are leveraging two very popular open source projects, Prometheus and Grafana, for Stork’s time-series data store and data visualization, and using the modern Angular framework for the GUI.\nWe are excited to be adding HTTPS and TLS transports to BIND in 2020, to enable both leading options for encrypted DNS.\nThe Kea team, in addition to driving the Stork project, is hard at work on enabling multi-threading to take Kea performance to a new level.\nProject Details BIND and DNS ISC staff continue to participate vigorously in DNS community events and organizations. 
In 2019, Ondřej Surý was re-elected to the Board of Directors of DNS-OARC. Cathy Almond continues her role on the Programme Committee for UKNOF, and ISC sent speakers to these and other events in 2019.\nISC staff authored two IETF drafts that were approved as RFCs in 2019:\nRFC8482 Providing Minimal-Sized Responses to DNS Queries That Have QTYPE=ANY, by J. Abley, O. Gudmundsson, M. Majkowski, E. Hunt; and RFC8490 DNS Stateful Operations, by R. Bellis, S. Cheshire, J. Dickinson, S. Dickinson, T. Lemon, T. Pusateri. Mark Andrews has been working for several years to identify and alert operators with DNS systems that do not fully support the EDNS protocol. The open source development community adopted this campaign as the first target for a DNS-wide cleanup event, marketed as \u0026ldquo;DNS Flag Day\u0026rdquo; in 2019. This is almost ironic, as EDNS was designed as a feature that should never have required a \u0026ldquo;Flag Day\u0026rdquo; because it was backwards-compatible. What had happened was that over a period of years, the DNS had added more features that relied on EDNS, and developers had added a forest of inefficient workarounds to adapt to non-compliant EDNS implementations. With the community support of most of the open source developers and many of the largest hosted-service providers for removing those workarounds, DNS Flag Day 2019 snowballed into a fairly high-profile event by the time the February \u0026ldquo;deadline\u0026rdquo; came around.\nISC was taken by surprise by how popular our hosted EDNS compliance checker became. After being available for several years online it suddenly got hammered with traffic, as the news about the Flag Day caught on in the technical press. 
We had to rate-limit use of the tool, which is very lightly provisioned, because some operators were testing every one of their zones with it.\nThe controversy over the deployment of open DNS resolvers running DNS over HTTPS (DoH) accelerated the debate about the impact of DoH on the overall security of the DNS, and about a shift in control and access to end-user data to hosted providers. ISC joined the new Encrypted DNS initiative. This group is beginning a discussion on work that needs to be done to promote and enable deployment of encrypted DNS.\nBIND 9 We published a policy for gradually removing obsolete BIND 9 options, including community consultation for major changes, option deprecation, and then finally removal of the option. Please read our KB article for more details. We updated ISC’s security vulnerability handling policy to reduce the number of minor security risks that result in a CVE. Details are in this KB article. We established a new Code of Conduct for the BIND 9 project, which applies to speech both on the bind-users mailing list and in our open GitLab environment. Releases\nDevelopment: 9.13.6, 9.13.7; 9.15.0 - 9.15.7 (10 releases)\nStable: 9.12.4; 9.14.0 - 9.14.9 (10 releases)\nESV: 9.11.6 - 9.11.14 (10 releases)\nThe major refactoring project for 2019 was replacing the BIND 9 network socket code with the popular open source libuv library. This will simplify code paths, remove one factor limiting performance, and facilitate using other transport layers (including TLS and HTTPS) in the future.\nWe also implemented a new dnssec-policy key and signing policy (KASP) for zones, enabling automatic key regeneration and rollover.\nFeature changes (the complete log of changes is here: https://gitlab.isc.org/isc-projects/bind9/-/blob/master/CHANGES)\nWe added support for the new GeoIP2 geolocation API. We added improvements for users re-signing large zones - spreading out the signing more evenly, and adding new statistics on signing timing. 
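The dnssec-policy (KASP) statement mentioned above moves key generation, signing, and rollover into named itself. Here is a rough sketch of what a policy looks like; this is hedged, not authoritative: the policy name, zone, algorithm, and lifetimes are illustrative, and the exact syntax was still settling during the 9.15 development series.

```
dnssec-policy "standard" {
    keys {
        // Long-lived KSK and a 90-day ZSK, both using ECDSA P-256
        ksk lifetime unlimited algorithm ecdsap256sha256;
        zsk lifetime P90D algorithm ecdsap256sha256;
    };
};

zone "example.com" {
    type master;
    file "example.com.db";
    // named generates keys, signs the zone, and handles rollovers automatically
    dnssec-policy "standard";
};
```

With a policy attached to a zone, periodic re-signing and ZSK rollovers happen without any external dnssec-keygen or dnssec-signzone runs.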
NSEC aggressive use, formerly enabled by default, is now disabled by default because it was found that in some cases it could take longer to assemble an answer from stored NSEC records than to re-query. We implemented a new DNS Cookie algorithm - siphash24 - replacing the old HMAC-SHA DNS Cookie algorithms. We added YAML output for dig, mdig, and delv. We deprecated Windows 32-bit support and the DNSSEC Look-Aside Validator (DLV). We improved DNSSEC trust anchor configuration using the trust-anchors statement, permitting configuration of trust anchors in DS as well as DNSKEY format. (The managed-keys statement was replaced with trust-anchors plus initial-key, and trusted-keys was replaced with trust-anchors plus static-key.) We added validate-except, which configures permanent negative trust anchors. DNSSEC validation is enabled by default now.\nCVEs: We published 9 BIND 9 security vulnerability notices in 2019.\nThanks to our code contributors! Tony Finch is again our most prolific external contributor, with 5 entries in the changelog for 2019. Petr Mensik made two contributions. Thomas Jach and Timothe Litt made two commits each. Samuel Thibault and Sergei Trofimovich are new contributors for 2019, with one commit each. DHCP development (Kea and ISC DHCP) Tomek Mrugalski continues as co-chair of the DHC working group in the IETF, although active work in that group is winding down.\nISC DHCP migrated to GitLab in 2019 (ISC DHCP was the last ISC open source project to migrate). 
We gained a number of new DHCP support subscribers and had a very busy support year.\nKea releases in 2019: Kea 1.6.0, 1.6.1, 1.7.0, 1.7.1, 1.7.2, 1.7.3\nThe major development efforts in Kea in 2019 involved providing a configuration database, enabling centralized control and reuse of complex configuration such as subnet definitions, and designing and developing Kea to evolve into a much higher-performance, multi-threaded application. Although DHCP performance is not a significant issue in the enterprise, we are seeing increased adoption of Kea among service providers, many of whom need to provide addresses for millions of devices.\nMajor features added in Kea include: Database - Configuration backend in MySQL Client Classification - \u0026ldquo;drop\u0026rdquo; class provides a kind of trash can for unwanted queries Max/min - lease-lifetime, a popular feature of ISC DHCP, was added to Kea Renew/Rebind timers are now calculated automatically We migrated our Kea Administrative Reference Manual (ARM) to the Sphinx system, so that users could have live on-line documentation for multiple software versions via Read The Docs. See https://kea.readthedocs.io/en/latest/.\nWe announced and implemented a new Kea release model with Kea 1.6.0, adding development point releases.\nWe are still experimenting with the best way of making our Kea open source accessible but also financially sustainable. In 2019 ISC added a Kea Basic subscription offering which includes subscriber-only Kea premium software, but does not include technical support. When we migrated to a static website, we also migrated to a different on-line store, and this has enabled us to sell low-cost \u0026ldquo;subscription\u0026rdquo; products.\nThanks to our community code contributors\nFranciszek Gorski, our top external contributor with 19 commits, mostly improving Kea statistics. Jonatan Raudsepp submitted a fix for Alpine linux in Perfdhcp code. Kristoffer Larsen fixed a bug in Makefile. 
Brent Bloxam fixed a bug in kea-admin that assumed the PostgreSQL database was always local. We added some new KB articles on Kea:\nKea Performance Optimization Kea DHCPv6 Design Considerations Using the Kea Configuration Backend Understanding Client Configuration\nCVEs: We announced 3 Kea security vulnerabilities in 2019.\nISC DHCP releases\nWe continue to maintain ISC DHCP, while encouraging migration to Kea. We did not do any releases in 2019, but we did commit work in the open repository. As 2019 ends we are in beta for an ISC DHCP maintenance version, with 4.1-ESV-R16b1 and 4.4.2b1 posted before the holidays. We expect to finalize these versions in early 2020.\nThanks to our external reporters and contributors\nJay Doran from BlueCat Networks suggested we add a ping-timeout knob. Jinmei Tatuya at Infoblox reported an issue which caused DDNS updates to be carried out over TCP rather than UDP. Fernando Soto from BlueCat Networks reported a dual-stack mixed-mode issue. Peter Nagy at Porion-Digital reported an issue that was causing the server, when running in DHCPv4 mode, to segfault when class lease limits are reached. He also submitted a patch. Brett Neumeier reported a minor issue with warnings when compiled with GCC 9. Tommy Smith updated the URL of IEEE oui.txt in contrib/dhcp-lease-list.pl. Joe LeVeque reported a problem with the define flags when using SO_BINDTODEVICE. Brad Smith at OpenBSD alerted us to a patch from OpenBSD to always set the scope id of outbound DHCPv6 packets (which only applies when compiling under OpenBSD). Jay Doran at BlueCat Networks reported that the "d" domain name option format was incorrectly handled as text instead of RFC 1035 wire format. Charles Hedrick pointed out a syntax error in ldap.c which cropped up under Ubuntu 18.04.1/gcc 7.4.0. CVEs: We did not announce any ISC DHCP security vulnerabilities in 2019.\nTechnical Support Customers\nOur support business was strong in 2019. 
We gained 24 new accounts and only 9 left us. All the rest renewed, for a total of 122 at the end of the year. We added a new support engineer, Peter Davies, who has several decades of experience with both BIND 9 and DHCP. Although he lives in Copenhagen now, Peter\u0026rsquo;s first language is Welsh, bringing ISC\u0026rsquo;s total number of native languages spoken to eleven (English [American, British, AND Australian], Polish, Czech, French, Romanian, Welsh, German, Dutch, and Portuguese).\nNearly half of our customers opt for the Silver support level, which includes our premium software and 7x24 response to critical issues. Many open source users tell us that they need the option of premium software to justify paying for technical support.\nBIND 9 support is our most popular product, but a number of our customers opt for support for multiple open source systems. The strongest trend we saw in 2019 was an uptick in support contracts for the Kea DHCP server.\nMost of our customers are Internet service providers, carriers, or large enterprises.\nSupport Issues and Trends\nWe saw a number of significant but intermittent issues with large zone updates (usually for RPZ), impacting BIND 9 performance. We received several requests for more metrics and performance improvements for managing ongoing DNSSEC signing and zone propagation, especially for large zones. DoH, DoT, and BIND 9 feature availability were asked about frequently. There were many requests for EDNS Client Subnet Identifier, often from users interested in using the feature creatively to enable some sort of customized response. Questions about Kea high-availability mode, and how to test and monitor that the function is working when there is no failure, became more prevalent. Many new users are looking for advice on Kea operations and asking about best practices for scaling and provisioning. As usual a number of queries were from users interested in optimizing their performance, whether for Kea or BIND 9. 
Farsight released their RPZ Services module, which we were able to offer to BIND -S edition customers. This is a plug-in for BIND that works with the BIND RPZ Service interface. We have gotten some new customers for ISC DHCP software, and are satisfied to continue maintaining it for the foreseeable future. Although ISC and others promoted it widely via conferences and blog posts, the 2019 Flag Day still took some of our users and customers by surprise. We resolved to do a better job of communicating the 2020 Flag Day via direct mail to ISC support customers and other means. We restarted our Support customer newsletter, sending a quarterly email to our non-OEM support accounts. We hope this will strike the right balance between too much information and not enough, and will provide us with a vehicle for communicating about longer lead-time events and changes. Talks\nSam DeFriez from Community Fibre, a Kea support customer, gave a presentation at UKNOF about his experience in deploying Kea DHCPv6.\nISC staff gave 15 DNS technical talks, including five how-to webinars which were recorded and are available for playback. Slides, and recordings if available, are linked on the Presentations page of our website.\nTopics included:\nEncrypted DNS: DoH vs. DoT BIND Logging: Content Out of Chaos UNIX Command-Line Basics DNS Flag Day 2020 Hyper-hyper Local Root Serving DNSKey Flood - What Does it Tell Us About Resolvers? A Story on Unsupported DNSSEC Algorithms DNSSEC: Your Internet Infrastructure Needs Better Protection Making a Modern DNS Server Stories from BIND 9 Refactoring Migration from ISC DHCP to Kea Using the Kea Configuration Backend F-Root \u0026amp; Technical Operations We continued to upgrade our network of F-Root Anycast instances and also installed new nodes at Harvard in Boston (US), New York (US), La Paz (Bolivia), Hangzhou (China), and Yogyakarta (Indonesia). 
When combined with the nodes operated in partnership with Cloudflare, our nodes now number in excess of 250. We consistently appear at or very near the top of the performance charts based on query latency, as reported by dnsperf.com.\nWe started a project to perform a complete refresh and modernization of the F-Root Operations and Management infrastructure.\nWe announced the shutdown of the SNS-Public Benefit service, to take effect on 31 January 2020. There are now many other organizations offering low-cost or free DNS secondary services to worthy non-profits, as we concentrate our resources on F-Root.\nWe were previously providing Internet connectivity services to the City of Palo Alto and various other municipal sites in the Bay Area of California. This service has also now been discontinued.\nWe joined the NLnog Ring which gives us access to 500+ systems from which we can run network tests, in return for us provisioning a virtual machine on our own infrastructure for use by other ring members.\nWe took advantage of a Fastly offering for non-profits; we now host https://downloads.isc.org and https://www.isc.org on the Fastly network, to offer the best performance possible to our users worldwide.\n","permalink":"/blogs/2019-in-review/","tags":[2019],"title":"2019 In Review"},{"categories":["BIND","Open Source","Webinars","Events"],"contents":"BIND 9 and DNSSEC BIND 9 has a complete implementation of the DNSSEC protocols, and DNSSEC validation is enabled by default in the newest versions of BIND. If you are a BIND administrator and are NOT using DNSSEC today because you are not sure how to set it up or maintain it, this webinar series is for you.\nThis series of 10 lectures will teach you everything you need to know to implement DNSSEC on both Authoritative and Resolver systems using BIND. Each lecture will be approximately an hour long. 
The live webinars will be on Wednesdays at the same time each week (18:00 UTC), beginning February 5th and continuing through April 15th, with no session on March 25th. Register once for the series, and attend when you can: recordings will be posted of each session so you can catch up on any you miss. And did we mention it\u0026rsquo;s free?\nSession topics:\nWhy Do We Need DNSSEC? - February 5, 2020 DNSSEC Development \u0026amp; Deployment (History) - February 12, 2020 Cryptography and DNS Background for DNSSEC - February 19, 2020 DNSSEC Resolution and Validation - February 26, 2020 DNSSEC Record Types, Keys, Signatures, and NSEC - March 4, 2020 DNSSEC Record Types, Keys, Signatures, and NSEC, Part 2 - March 11, 2020 DNSSEC Signing with BIND - March 18, 2020 Key Rollover and Algorithm Rollover - April 1, 2020 (no fooling! no webinar March 25) Key Maintenance With the New KASP Tool - April 8, 2020 DNSSEC Troubleshooting - April 15, 2020 Speaker Information: Alan has over 20 years experience in UNIX and Linux systems administration, specializing in BIND and ISC DHCP for the past 13 years. Alan has been a Support Engineer at ISC, a Senior Technical Trainer for InfoBlox, and a Migration Architect for BlueCat. Currently Alan is a Systems Engineer at ISC.\nNote: The BIND 9 DNSSEC Guide may also be useful for anyone learning about DNSSEC. This guide is now published as an appendix to the BIND 9 Administrative Reference Manual.\n","permalink":"/blogs/dnssec-webinar-series-2020/","tags":["BIND","DNSSEC","Webinars","Events","Training"],"title":"BIND 9 and DNSSEC - Webinar Series"},{"categories":["BIND","DNS","DHCP","Open Source","Company News"],"contents":"Once again, it\u0026rsquo;s time to look back over the last year and tell our customers, users, and supporters how much we appreciate them. 
We thank you for using and contributing to ISC\u0026rsquo;s open source software and wish you all the best for a happy and healthy year in 2020!\n","permalink":"/blogs/2019-happy-holidays/","tags":["holidays","2019","ISC"],"title":"Happy holidays from ISC!"},{"categories":["BIND","Open Source","Webinars","Events"],"contents":"We were delighted to see so much interest in our December 2019 webinar on encrypted DNS! Unfortunately, due to a bit of poor planning on our part, we were unable to accommodate everyone who signed up to attend that day, so we are offering another live webinar on the same topic on January 15, 2020 at 10 AM PST/1 PM EST/6 PM UTC. We sincerely apologize to anyone who was unable to participate in the December session but promise that you will not be disappointed by the January version.\nOnce again, this webinar will discuss the differences between DNS, DoH, and DoT as well as evaluate the current environment, dive into an actual implementation, and examine how to debug BIND systems with encrypted DNS enabled. Participants will be able to ask questions.\nThe slides and a recording of the webinar will be available, as always, on our Presentations page once the webinar is complete.\nWe invite all BIND administrators and any other interested users to register for the January webinar at https://zoom.us/webinar/register/WN_t6mNTVLOT_GwfVT08INBCw.\n","permalink":"/blogs/doh-webinar-january-2020/","tags":["BIND","DNS over HTTPS","DoH","DNS over TLS","DoT","Webinars"],"title":"Encrypted DNS: DoH vs. DoT Webinar - January 15, 2020"},{"categories":["BIND","Open Source"],"contents":"Two years ago, interest in DNS Encryption was lukewarm\u0026hellip; In May of 2018, ISC did a survey asking our users about their interest in deploying various DNS privacy measures, including both QNAME minimization and encryption (DNS over HTTP or DoH and DNS over TLS or DoT). 
At that time, although 29% of enterprise respondents were very interested in offering an encrypted DNS solution to their users, almost 75% stated they did NOT want to encourage users to use a free, public hosted DNS that implemented DNS privacy features. Among service providers, although just over half were very interested in offering encrypted DNS, a third of them cited LACK OF DEMAND as a significant obstacle to implementing it. (We are updating the survey now; click here to participate)\nDoH has exploded with rapid development and deployment Eighteen months later, the landscape is dramatically different. We now have a handful of browsers that implement encrypted DNS using DoH, many open, hosted DoH services, and there is a lot of pressure on enterprises and service providers to figure out what to do about it. Meanwhile, considerably less progress has been made in DNS over TLS, the main competitor to DoH. Google is implementing DoT support in the Android operating system, and there are some open source DoT resolvers. We don’t yet have stub resolvers that implement encrypted DNS in the desktop operating systems; the rapid drive towards encryption has been mostly in the browsers, not the operating systems. (Systemd apparently has DoT support, but it isn\u0026rsquo;t clear whether it works yet.)\nWhat are the differences between DoH, DoT and ‘regular DNS’? If you are confused about how DoH compares to DoT and to ‘regular’ DNS, you are not alone. This blog explains DoH and DoT in relation to actual privacy threats, which is critical for understanding what each does. At the kindergarten level, however, there are 4 main differences between DoH and ‘regular DNS’ illustrated below:\nRegular DNS (and DoT) are implemented in the operating system, and support all applications that need to connect over the Internet (including, for example, mail servers). 
DoH is typically implemented in the browser, in support of HTTP only, although the operating system can also be configured with a proxy to direct all DNS over HTTPS. Regular DNS queries and responses are sent in cleartext, and may be read by someone intercepting them. DoH (and DoT) traffic is encrypted. This encryption provides protection against anyone wishing to read the contents of the DNS lookups, but note that there are still ways to discover what sites are being visited, despite encryption. Regular DNS responses typically are filtered through a DNS firewall that implements a ‘blacklist’ of sorts for zones with a poor reputation for malware or abuse. This mechanism has evolved over a number of years, and there is no equivalent available for DoH at the moment. Regular DNS is typically provided by whoever administers the local network. The DoH resolver is currently configured into the browser, with some browsers coming pre-configured for cloud-based services. In addition, although ‘regular DNS’ has been proven to be very scalable and reasonably low-latency, the scalability and latency of encrypted DNS are still being investigated. Regular DNS uses UDP (User Datagram Protocol), which enables the application to accept packets in any order. Encrypted DNS uses TCP (Transmission Control Protocol), which effectively establishes virtual connections with the server, using some finite server resources for each connection.\nWho is advocating for DoH? Privacy advocates are in general in favor of either DoT or DoH, because both protect against someone intercepting the DNS messages between the client and resolver. DoH is kind of the ‘web’ approach to DNS, so it is favored by people most comfortable with web servers. 
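The ‘web’ approach is quite literal: RFC 8484 simply wraps an ordinary wire-format DNS query in an HTTPS request. The following is a minimal, hedged sketch of the client-side encoding only; the resolver hostname is a placeholder, and a real client would actually send the request (typically via POST or HTTP/2 GET) and handle errors.

```python
import base64
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Assemble a minimal DNS query in RFC 1035 wire format (QTYPE 1 = A)."""
    # Header: ID=0 (RFC 8484 recommends 0 for HTTP cache friendliness),
    # RD flag set, one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(resolver_host: str, query: bytes) -> str:
    """RFC 8484 GET form: base64url-encode the query, strip padding, pass as ?dns=."""
    dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"https://{resolver_host}/dns-query?dns={dns_param}"

# doh.example.net is a placeholder resolver name
url = doh_get_url("doh.example.net", build_query("www.example.com"))
```

Everything after the TLS handshake, including this query, is opaque to an on-path observer, which is exactly the property both DoH and DoT provide.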
Google, Mozilla, Cloudflare and others are pushing fairly hard for users to switch from \u0026lsquo;regular\u0026rsquo; DNS to DoH.\nThe impetus for DoH has not come from enterprises or from network access service providers; for the most part they are scrambling to catch up and figure out how to respond. Enterprises and service providers who offer DNS services today typically have network security features that depend on DNS, and no equivalent solutions for DoH. Because some browsers are threatening to enable DoH by default, perhaps offering the end user a hard-to-find or discouraging interface for opting out, enterprises are scrambling to figure out how to block DoH, and service providers feel pressured to deploy some DoH service option for parity with the hosted services providers.\nWho decides which resolver your device queries? There is an argument that DoH puts the end-user in charge of who will see their DNS data. In fact, the user can be in charge of where their DNS data goes with either traditional or encrypted DNS, but it is a question of what is the DEFAULT behavior. Typically, with classic DNS, the network operator (enterprise or service provider) establishes the network DNS service and the end user has to have some sophistication to override that system-level network setting and choose another DNS service. With DoH, the Browser selects the DNS resolver, and again, the end user has to have some sophistication to override that browser setting. The Firefox implementation includes pre-configured, centralized DoH providers, selected by Mozilla. While the end user can change this configuration, we all know that defaults are often left untouched. This means that by merely applying a browser update, the end user would unknowingly be switching DNS providers. (By contrast, Google has announced that if there is a network DNS server Chrome will not override that.) 
The proposed change in how the DNS service is provisioned, and in who controls it, caused a lot of controversy. Advocates disagree strongly about whom to trust to operate the resolver at the other end of the DNS connection. Browser-based DNS is clearly a way to take control from the user's contracted, regulated network operator and transfer it to a web-based service provider. If the end user is selecting this web-based provider, and trusts them, there is no problem, but if Mozilla or another browser vendor selects the provider, the end user might not have any relationship with, or trust in, their new provider. In fact, Mozilla has stated directly that their intention is to transfer control over DNS data from ISPs to their partners.\n"We are trying to essentially shift the power to collect and monetize peoples' data away from ISPs and providing users with control and a set of default protections," he added, regarding Mozilla's changes.\nMarshall Erwin, senior director of trust and safety at Mozilla, as reported in VICE https://www.vice.com/en_us/article/9kembz/comcast-lobbying-against-doh-dns-over-https-encryption-browsing-data\nIncreased privacy is obviously good The argument that a fast, forced migration to DoH is for the benefit of end users is hard to refute without seeming to argue against end-user privacy on the Internet, which is like arguing against motherhood. The problem is that while using DoH to encrypt from the browser to the cloud DoH provider will provide some additional privacy to the end user, overall, it does not seem to be in the best interests of anyone but the browser vendors and cloud providers to prefer DoH over DoT. Certainly, it is not in the interests of enterprises, whose top priority is ensuring network security and protecting the users and systems within their firewalls from malware and viruses. ISPs have legal obligations to block certain kinds of content, mandated by their regulators. 
In both of these cases, this blocking is implemented using DNS filtering that will be completely ineffective if users switch to DoH. Both enterprises and service providers who provide high-quality, large scale DNS resolution to their users today are also concerned about the scalability, performance and manageability of DoH. DoH runs over TCP rather than UDP: the scalability and resilience of UDP is well-established, while TCP can be considerably more resource intensive and sensitive.\nWhat about the end users, though? Aren’t encryption and privacy in everyone\u0026rsquo;s best interests? Of course, all things being equal, increased privacy is desirable. However, with DoH, other things AREN’T equal. There are increased security risks for users using DoH, because they are no longer protected by DNS filtering - and there are other, new security threats we are just discovering that are enabled by DoH. Some users who really need privacy for their own personal safety may be duped into feeling protected when using DoH, when actually using a VPN would provide much better privacy. DoH encrypts only the DNS lookup, whereas a VPN would protect all communications.\nDeployment of DoH may contribute to increased centralization on the Internet The biggest underlying concern about DoH, however, is the way the browser implementations can contribute to increased centralization of control on the Internet. This is because DoH makes it more likely that a key control point, the DNS, will move into the \u0026lsquo;cloud\u0026rsquo; from the access network. This is a political and business issue, not a technical one. The Internet has thrived in part because of intense, open competition for better and more attractive services and applications. This intense competition is due to the openness of the Internet, its lack of regulation, and the lack of centralized control. There is a relatively low barrier to entering the markets for software and services on the Internet. 
As long as you adhere to some technical standards to ensure interoperability, you can innovate.\nExcessive concentration of control is obviously BAD As a few enormous companies control more resources and critical services on the Internet, they are becoming, effectively, dictators. For example, it is difficult to have a successful website today without adhering to Google’s guidelines, because if you don’t, your site won’t be ‘findable’ via Google search. It is difficult to criticize the business motive to grow and provide a very appealing service, even though the end result may have a monopolistic impact. Even though you may regard these ‘technology dictators’ as benevolent, that doesn’t change the fact that they have excessive control.\nLarge scale web hosting is a natural monopoly A lot of network services are natural monopolies - there are such advantages that come with massive scale that it is very hard for any new entrant to compete effectively with a larger, established system. This is why common carriers are generally regulated industries, because they are natural monopolies in the geographies they serve. However, the big web-hosting providers are also natural monopolies, and they are unregulated. Enterprise networks and service provider access networks are actually becoming natural bastions of local control, slowing the inexorable increase of centralization on the Internet. Widespread deployment of DoH will weaken those independent network domains and further contribute to the hegemony of CDNs and centralized hosted service providers. While there may be some efficiencies and optimizations possible with greater centralization, history and experience have shown again and again that eventually we will regret relinquishing control to a few unregulated, self-interested giant tech companies. 
Eventually, this loss of independence and self-determination will lead to less freedom of choice, less competition, and less privacy and security.\nWhy is ISC implementing DoH? So, if we are so skeptical of DoH in its current state, why has ISC committed to implementing DoH in BIND 9? The development of encrypted DNS is something we welcome, in general. We are also not in the business of policing the Internet. We have tried, in a few cases, to police what we regard as anti-social behavior by refusing to tolerate some violations of Internet technical standards. However, DoH is being standardized in the IETF, and we want BIND 9 to be a reference implementation of the standards. ISC exists to help users of our software make the most of critical Internet services. Some of our users now want to at least experiment with DoH. There are efforts underway to fix the biggest problem with DoH, which is the lack of mechanisms to configure the client. Also, Mozilla, one of the big proponents of DoH and user privacy in general, has graciously offered to underwrite the development of DoH in BIND 9 so our users and support customers don’t have to fund it. It is possible that DoH will turn out to be a good thing in the long run. In any case, we are going to enable people to try it, to experiment with it, and to evaluate the likely impact.\nWhat can you do today? For accurate technical information on the characteristics and status of both DoH and DoT, we recommend the dnsprivacy.org web site. If you are an enterprise concerned about users bypassing your network security, you probably need to develop a policy, and you may wish to attempt to block DoH. Mozilla is helpfully providing a mechanism to enable network administrators to block Firefox users from using DoH if there is an incumbent DNS resolver.\nMost ISPs and enterprises that want to be on the leading edge of DNS privacy might want to start experimenting with encrypted DNS and offering DoH alongside regular DNS. 
It is possible today to deploy encrypted services using BIND 9 with an HTTP proxy, such as NGINX, HAproxy, or a DNS-specific proxy like this one from the author of DNScrypt or DNSDist from PowerDNS. ISC is committed to producing versions of BIND 9 in 2020 with the proxy function better integrated to facilitate a quality service deployment. We will also, at the same time, work on DNS over TLS, which is a competing DNS encryption method that does not transfer control of the DNS to the big web server providers. DNS over TLS is evolving much more slowly, however, because the major desktop operating systems don’t yet support it. There is a group of implementers forming to exchange information about how to deploy DNS encryption, and researchers are starting to publish studies on the impact of encryption on DNS latency and scalability.\nTo learn more about how DoH compares to DoT, or how to deploy or block DoH today, listen to our webinar on the topic, scheduled for December 11, 2019.\nRecommended reading on encrypted DNS Here are links to some essential reading on encrypted DNS:\n[Encrypted DNS dot org](https://www.encrypted-dns.org) - and check out the [active mailing list archives](http://bit.ly/EDDI-List-Archives) [DNS Privacy dot org](https://dnsprivacy.org/wiki/) - https://dnsprivacy.org/wiki/ is an excellent, authoritative resource on the different DNS privacy solutions and how they compare. [M3AAWG tutorial on encrypted DNS](http://www.m3aawg.org/dns-crypto-tutorial) - www.m3aawg.org/dns-crypto-tutorial provides detailed instructions on how to deploy encrypted DNS. 
[Excellent summary of one enterprise's choice wrt the DoH/Firefox controversy](https://www.dns.cam.ac.uk/news/2019-09-19-dont-use-application-dns.html), as of September 2019 - from Tony Finch of Cambridge University, https://www.dns.cam.ac.uk/news/2019-09-19-dont-use-application-dns.html [Sam Knows on DoH performance](https://www.samknows.com/blog/dns-over-https-performance) - https://www.samknows.com/blog/dns-over-https-performance [List of publicly-accessible DoH servers](https://github.com/curl/curl/wiki/DNS-over-HTTPS) actively maintained by the community - https://github.com/curl/curl/wiki/DNS-over-HTTPS [Mozilla canary domain](https://support.mozilla.org/en-US/kb/canary-domain-use-application-dnsnet) - a tool for blocking DoH, https://support.mozilla.org/en-US/kb/canary-domain-use-application-dnsnet [Centralized DoH is bad for privacy](https://labs.ripe.net/Members/bert_hubert/centralised-doh-is-bad-for-privacy-in-2019-and-beyond) - RIPE NCC Blog by Bert Hubert, https://labs.ripe.net/Members/bert_hubert/centralised-doh-is-bad-for-privacy-in-2019-and-beyond Is New always better? 
[UDP vs DoH](https://blog.apnic.net/2019/12/06/is-new-always-better-udp-vs-doh) - APNIC blog from University of London, https://blog.apnic.net/2019/12/06/is-new-always-better-udp-vs-doh/ [DoH: Anti-competitive and Network Neutrality Aspects](https://blog.powerdns.com/2019/12/03/doh-anti-competitive-and-network-neutrality-aspects/) - PowerDNS Blog by Bert Hubert, https://blog.powerdns.com/2019/12/03/doh-anti-competitive-and-network-neutrality-aspects/ Blog from Bluecat Networks \"[DNS over HTTPS: What, Why, \u0026 Who Cares](https://www.bluecatnetworks.com/blog/dns-over-https-what-why-who-cares/)\" [DNS Security: Threat Modeling DNSSEC, DoT, and DoH](https://www.netmeister.org/blog/doh-dot-dnssec.html) - https://www.netmeister.org/blog/doh-dot-dnssec.html [RFC 8484 (DoH)](https://tools.ietf.org/html/rfc8484) \u0026 [RFC 8310 (DoT)](https://tools.ietf.org/html/rfc8310) - the IETF technical standards documents [Signaling That an Authoritative DNS server offers DoT](https://datatracker.ietf.org/doc/draft-levine-dprive-signal/) (new proposal that would help networks deploy DoH alongside ‘regular’ DNS) ","permalink":"/blogs/doh-encrypted-dns/","tags":["BIND","DNS over HTTPS","DoH","DNS over TLS","DoT"],"title":"Encrypted DNS: Why all the drama about DOH?"},{"categories":["BIND","Open Source","Webinars"],"contents":"DNS over HTTPS (DoH) and DNS over TLS (DoT) are hot topics in the DNS world today, as developers and end-users become more and more focused on data privacy. It\u0026rsquo;s a very complex issue with lots of valid concerns on both sides, but it\u0026rsquo;s fair to say that DoH and DoT will not be going away anytime soon.\nBut what does that mean for the Internet, and specifically for users of BIND 9, ISC\u0026rsquo;s DNS software? We invite everyone to join us for a webinar to be held on Wednesday, December 11, 2019 at 1 PM ET (10 AM PT, 1800 UTC), where we will cover some of the main topics related to DoH and DoT. 
We will try to cut through some of the hyperbole you may be hearing and focus on the essential information you need to know, including a summary of the concerns around DoH, a comparison of DoT and DoH, and a status update on the development and deployment of both.\nPlease register at https://zoom.us/webinar/register/WN_ckWHOFgSQha7lwKhKbdpOg and we look forward to seeing you there.\nUPDATE December 12, 2019: The slides and a recording of the December 11, 2019 webinar are now available at our Presentations page. Due to the unprecedented interest in this webinar, we will be presenting it again live on January 15, 2020: please register at https://zoom.us/webinar/register/WN_t6mNTVLOT_GwfVT08INBCw.\n","permalink":"/blogs/doh-webinar/","tags":["BIND","DNS over HTTPS","DoH","DNS over TLS","DoT","Webinars"],"title":"Encrypted DNS: DoH vs. DoT Webinar - December 11, 2019"},{"categories":["ISC","Company News"],"contents":"We thought our community might enjoy getting to know a little bit about some of the people behind the software you use every day, so we will be presenting short profiles of some of our engineers and other staff members. We hope you find them as interesting as we do!\nMichał Kępień of Warsaw, Poland, currently holds the titles of BIND 9 Quality Assurance Manager, Software Developer, and CLO (Chief Latte Officer) at ISC. He has an MSc in Computer Science from the Military University of Technology in Warsaw, and before coming to ISC in June of 2017 he worked as a network/system administrator at a small IT company called Ce3 and then as a DNS engineer at NASK, the registry for .pl (BIND support customers - shout out!).\nAs QA Manager, Michał is constantly improving our test processes, recently adding Windows, FreeBSD, and OpenBSD to BIND\u0026rsquo;s continuous integration testing. BIND users have Michał to thank for the ISC CentOS packages - he created the original packages on our local servers and curates the new subscriber repo on Cloudsmith.io. 
Among the BIND developers, Michał is known as a thorough, detail-oriented code reviewer. You don\u0026rsquo;t often get a review like \u0026ldquo;lgtm\u0026rdquo; from Michał; you are more likely to get several paragraphs of comments and requests for clarification. Michał was the primary developer of BIND\u0026rsquo;s Mirror Zones feature, which allows a resolver to download, verify, and use the root zone locally.\nMichał believes in ISC\u0026rsquo;s mission and is a fan of the open source concept. As he says, \u0026ldquo;I like making things work reliably and I believe that putting your code under public scrutiny is the best way to improve its quality. The more people use a given piece of software and/or try to break it, the more corner-case bugs can be identified and eliminated - and sharing the source code publicly facilitates that.\n\u0026ldquo;On the other hand, as a user of open source, I appreciate the fact that even if a piece of software does not work as I would expect it to, I can try tweaking it myself. Sometimes, this results in a patch being sent upstream. It is a great feeling to see your patch accepted and knowing that the code you contributed to is running on someone else\u0026rsquo;s machine.\u0026rdquo;\nAt ISC, Michał loves \u0026ldquo;working on something that is indirectly used by millions of people all over the world and having the privilege of doing that in the company of really smart people who are happy to share their knowledge and insights.\u0026rdquo; Because there are now four Michal/Michał/Michaels at ISC, Michał volunteered to be known as \u0026ldquo;Mr. White\u0026rdquo; to reduce confusion.\nMichał works hard every day to make ISC\u0026rsquo;s products better, to the best of his ability. He also regularly delights the rest of the staff (and his family) with his attempts to make latte art.\nThank you for using ISC\u0026rsquo;s software and we hope you have enjoyed this peek behind the ISC curtain. 
We welcome your feedback at marketing@isc.org and look forward to bringing you more employee profiles in the future.\n","permalink":"/blogs/isc-profile-kepien/","tags":["BIND","Profiles"],"title":"Meet an ISC Engineer - Michał Kępień!"},{"categories":["BIND","Open Source"],"contents":"The Mozilla Open Source Software award will underwrite the cost of adding DNS over HTTPS to BIND 9 to make this new technology more widely available.\nUnder the terms of the award, a DoH implementation will be included in the free open source BIND 9 code, released under the MPL 2.0 license.\nWhat is DoH? DNS over HTTPS is a new RFC (https://tools.ietf.org/html/rfc8484) from the IETF, published in October 2018, that gives web applications access to DNS information via existing browser APIs. ISC has always been committed to implementing DNS standards to give DNS users and operators a choice about where, how, and when they want to deploy them. It is part of our mission to offer the DNS ecosystem more choices.\nDoH and DoT DoH and another proposed standard, DNS over TLS (aka DoT), each provide some protection from on-the-wire snooping of DNS information. ISC has announced it will develop support for both DoH and DoT in 2020, so that operators and researchers can evaluate and assess both of these new encrypted transport options.\nTimeline The MOSS award to ISC spans a 12-month development period. The first milestone is completion of some refactoring in BIND 9 to modernize the networking stack, which will support the development of alternative transports for DNS. ISC will publish a DoH design document by the end of April 2020, in the public BIND 9 development wiki. Then, by the end of June, ISC will publish code for a client implementation (e.g. in the dig and delv utilities). The complete implementation will be available in the master development branch in October 2020. 
Finally, ISC will backport DoH to the long term support branch, 9.16, before the end of 2020 to provide widespread access to the new feature.\n","permalink":"/blogs/moss-doh-award/","tags":["BIND","Open Source","Donations"],"title":"ISC receives Mozilla Award for Development of Foundational Technology"},{"categories":["BIND","Open Source","Webinars"],"contents":"For the third in our series of monthly webinars (see our Presentations page for recordings and slides from the previous two), Alan Clegg of ISC will speak about BIND logging on Wednesday, October 30, 2019, at 1 PM ET. Click here to register! And hopefully we\u0026rsquo;ll even have a little pre-Halloween fun.\nAre you hearing the moans and shrieks of queries past? Are dead connections haunting you? Learn how to set up BIND logging so that you can investigate these mysteries safely before they lead to even greater horrors.\nAlan will teach you how to optimize the logging in BIND. He\u0026rsquo;ll explain how to get the information that you need in a place that you can find it and make sure that important information isn\u0026rsquo;t lost, while keeping the size of log files as small as possible. If you are a BIND administrator, you won\u0026rsquo;t want to miss it!\nSpeaker Information: Alan has over 20 years experience in UNIX and Linux systems administration, specializing in BIND and ISC DHCP for the past 13 years. Alan has been a Support Engineer at ISC, a Senior Technical Trainer for InfoBlox, and a Migration Architect for BlueCat. Currently Alan is a Systems Engineer at ISC.\n","permalink":"/blogs/bind-logging-webinar/","tags":["BIND","Logging","Webinars"],"title":"BIND Logging Webinar - October 30, 2019"},{"categories":null,"contents":"ISC maintained SNS (secondary name service) for many organizations and individuals over the years. 
As ISC has shifted its focus more towards developing and supporting open source software and operating a root server, these SNS operations are no longer core to our business. We cannot justify continuing to support them, especially considering the many alternative services available.\nISC SNS was shut down on January 31, 2020. ","permalink":"/sns-pb/","tags":null,"title":"Secondary Name Services"},{"categories":null,"contents":"The Domain Survey has been discontinued. The survey collected information about IPv4 only, and we felt it was increasingly misleading for that reason. The historical summary data are available below. The Domain Survey attempted to discover every host on the Internet by doing a complete search of the allocated address space and following links to domain names. It was sponsored by Internet Systems Consortium, Inc. with technical operations subcontracted to 3waylabs. Survey data were collected and published quarterly.\nThe most recent survey available online is dated January 2019. 
Data for January 2018 are flawed and are included here for amusement purposes only.\nPast Results Jan 18\nJan 17\nJan 16\nJul 15 | Jan 15\nJul 14 | Jan 14\nJul 13 | Jan 13\nJul 12 | Jan 12\nJul 11 | Jan 11\nJul 10 | Jan 10\nJul 09 | Jan 09\nJul 08 | Jan 08\nJul 07 | Jan 07\nJul 06 | Jan 06\nJul 05 | Jan 05\nJul 04 | Jan 04\nJul 03 | Jan 03\nJul 02 | Jan 02\nJul 01 | Jan 01\nJul 00 | Jan 00\nJul 99 | Jan 99\nJul 98 | Jan 98\nOld Survey Results Jul 97 | Jan 97\nJul 96 | Jan 96\nJul 95 | Jan 95\nLinks and Information About the Domain Survey\nFAQs about the ISC Domain Survey and definitions used\n","permalink":"/survey/","tags":null,"title":"Internet Domain Survey"},{"categories":["Kea","BIND","DHCP","Open Source"],"contents":"At ISC we are updating our security vulnerability disclosure policy.\nSecurity and transparency remain the primary goals of our vulnerability handling procedures but we would like to make some adjustments to better serve our customers and reduce the impact of frequent out-of-cycle security releases.\nWhy are we making changes?\nOur current vulnerability disclosure policy has been in place for more than six years. During that time we have had an opportunity to evaluate the policy and its application to a variety of real-world vulnerabilities identified in ISC software products and after each disclosure we have taken the opportunity to examine our handling of the incident. Based on our experience we think we can improve the process to schedule security fixes more predictably, avoid unnecessary alarms, and still ensure that those security vulnerabilities that constitute a significant risk for our users get patched and disclosed in a timely manner.\nWhat changes are we making?\nThe most significant change will be to change the threshold CVSS score at which our policy requires us to go through our full disclosure process. 
As it is currently written, the policy states that we will have a disclosure for any vulnerability that scores as \u0026ldquo;High\u0026rdquo; or \u0026ldquo;Critical\u0026rdquo; (or \u0026ldquo;Critical/Catastrophic\u0026rdquo;, a category that no longer even exists) but the policy also states a threshold CVSS value of 5.0, which was initially chosen based on a previous version of the CVSS scoring system. In CVSS 3, which we have used since 2017, the \u0026ldquo;High\u0026rdquo; and \u0026ldquo;Critical\u0026rdquo; severity categories begin at a score of 7.0, so we propose to change the threshold value to align with the categories used by the current scoring system. It is important to note that this threshold value is used by ISC to determine when a phased disclosure process is mandatory. We retain the option to issue a Security Advisory or Operational Notification for issues which score below this threshold if, in our judgement, the impact on operators would make such an action advisable. We also plan to begin using the Temporal and Environmental metrics introduced in CVSS 3.0. These attempt to improve the accuracy of vulnerability scoring by allowing factors to be taken into account which are not considered in the Base Metrics. Our previous policy of not using these metric groups meant that adjustments due to these factors were set at the most pessimistic levels even if taking all available information into account would have resulted in a lower score (for example: a vulnerability which has an effective configuration-time workaround and does not require an immediate software update to prevent exploitation is eligible for a lower CVSS score, but only if one is using the optional Temporal Metrics when scoring.) Finally, since we are making public changes to our disclosure practices, we would like to take this opportunity to also introduce a policy change requested by our customers. 
Beginning this year, to the extent possible we will avoid scheduling disclosure of non-emergency security issues during the period beginning November 1 and ending December 31. Based on feedback we have received, many of our customers have special configuration and change freeze requirements for this period, which can be an especially important one for retailers and some other types of business. We therefore propose that, when a non-public vulnerability is discovered by us or reported to us and we are able to do so, disclosure be postponed until after January 1 in order not to interfere with the end-of-year retail and holiday season. If you want to read the new policy, you can find it in our Knowledge Base \u0026ndash; ISC Security Vulnerability Disclosure Policy\n","permalink":"/blogs/vulnerability-policyupdate/","tags":["Kea","BIND","DHCP","Open Source"],"title":"ISC Security Vulnerability Policy Updated"},{"categories":["Kea","Open Source"],"contents":"We are pleased to introduce the latest version of the Kea DHCP server. This release adds a major new feature, which enables an entirely new provisioning and deployment model.\nConfiguration Backend The Kea Configuration Backend (abbreviated as config backend or CB) lets you manage multiple DHCPv4 and DHCPv6 server configurations from a MySQL database. You can store most of the Kea configuration in the CB: global parameters, shared networks, subnets, pools, options, and option definitions. Instead of specifying these parameters in your local configuration, you can now put this information in the database and simply tell Kea where to find it. Kea will merge the parameters from the CB with other parameters configured locally (such as interfaces), and can periodically poll the CB and retrieve updates. 
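As a sketch of how a server is pointed at the CB, a local DHCPv4 configuration might contain a fragment like the following. This is a hedged illustration, not a drop-in configuration: the database name, credentials, host, and polling interval are placeholders, and the parameter names follow the config-control section described in recent Kea documentation, so check the Kea Administrator Reference Manual for your version before using them.

```json
{
  "Dhcp4": {
    "config-control": {
      "config-databases": [
        {
          "type": "mysql",
          "name": "kea",
          "user": "kea",
          "password": "secret",
          "host": "cb.example.com"
        }
      ],
      "config-fetch-wait-time": 30
    }
  }
}
```

Anything not stored in the CB (interfaces, for example) stays in the local file and is merged with the parameters fetched from the database.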
This will greatly simplify automated configuration change deployments.\nRecorded webinar introducing the Kea configuration backend Kea 1.6.0 documentation Backends Available With this change, you can now choose to store leases, host reservations, and configurations in a database backend. MySQL, PostgreSQL, and Cassandra are all supported for lease storage; MySQL and PostgreSQL are supported for host reservations; and MySQL is supported for the configuration backend.\nNew Hooks Library Provides Management for the Config Backend The CB stores data in a MySQL schema that is public. It\u0026rsquo;s possible to insert a configuration into the MySQL tables manually or automatically using SQL scripts, although this requires a reasonably good knowledge of the schema. As an alternative, ISC has developed a new hooks library that provides management commands for config backends. It simplifies many typical operations, such as listing, adding, retrieving, and deleting global parameters, shared networks, subnets, pools, options, and option definitions. For a complete list, see commands starting with \u0026ldquo;remote-\u0026rdquo; in Appendix A of the Kea Administrator Reference Manual. The cb_cmds hooks library is available to ISC support subscribers only. If you are not a subscriber and would like access to the cb_cmds hooks library, please contact info@isc.org and our sales team will be happy to assist you.\nNew Features Requested by Users In addition to the configuration backend, we have also added a number of other new features requested by users. See the Release Notes for full details. Here are a few highlights:\nDROP class. Kea offers a powerful client classification mechanism to distinguish between various types of devices, and you can now use this tool to drop unwanted traffic. You simply need to define a class with a special name called DROP; if an incoming packet is assigned to this class, the packet is dropped immediately #606. max-lease-time, min-lease-time. 
Until now, Kea allowed administrators to configure a specific, fixed value for lease lifetimes. Two new optional parameters - max-lease-time and min-lease-time - allow you to expand this to a range. Both the DHCPv4 and DHCPv6 protocols allow clients to send hints, and these new parameters let Kea honor those hints #295. Improved compatibility. Kea follows RFC standards as defined by the IETF; however, there are some devices out there that are not as strict in following the standards. Nevertheless, Kea should be able to work with such devices. Kea 1.6.0 now sends the DHCPv4 message type as the first option #530, the trailing null is now stripped from received options #539, empty hostnames are handled better (#40), and it\u0026rsquo;s now possible to define sub-options with codes 0 and 255 #564. Control socket for DDNS. The DDNS daemon has finally gotten its control socket and can process commands. It\u0026rsquo;s now possible to update its configuration and in general interact with it while the daemon is running. The following nine commands are now supported: build-report, config-get, config-reload, config-set, config-test, config-write, list-commands, shutdown, and version-get #30. ISC Kea Packages Kea is adding integration with other open source products, including NETCONF (provided by Sysrepo), RADIUS (based on the FreeRADIUS client), and Cassandra. That flexibility comes at a price, as it is getting more difficult to install Kea with those optional dependencies enabled. We also noticed that some OS distributions lag behind in packaging the latest Kea releases. To help alleviate both of those problems, ISC has begun providing our own native (DEB and RPM) packages for several popular distributions. We are providing packages for current versions of CentOS, Debian, Fedora, and Ubuntu. The binary packages cover DHCPv4, DHCPv6, DDNS, Control Agent, and all three (MySQL, PostgreSQL, and Cassandra) backends. 
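Returning to the DROP class highlighted above, a minimal, hypothetical DHCPv4 fragment might look like this. The test expression is an arbitrary example that drops clients whose client-identifier (option 61) starts with a given prefix; the matching criteria are entirely up to you, and the expression syntax is documented in the Kea Administrator Reference Manual.

```json
{
  "Dhcp4": {
    "client-classes": [
      {
        "name": "DROP",
        "test": "substring(option[61].hex, 0, 4) == 'evil'"
      }
    ]
  }
}
```

Any incoming packet for which the test expression evaluates to true is assigned to the DROP class and discarded before normal processing.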
ISC subscription customers will also be given an option to conveniently install hooks as separate packages. In the future we plan to experiment with packaging NETCONF with Sysrepo and all its necessary dependencies.\nFind these packages at ISC\u0026rsquo;s Cloudsmith repo.\nChanges to Kea\u0026rsquo;s Release Model Beginning with this 1.6.0 release, we plan to change the release model for Kea. We will stabilize every even-numbered minor version (where the second digit of the version number is even) for production use. Therefore, 1.6.x, 1.8.x, and 2.0.x will be stable versions. We will also begin producing development versions on every odd-numbered minor version branch, including 1.7.x, 1.9.x, and so on. For more details on the plan, see ISC\u0026rsquo;s Software Support Policy. Please note that the annual release dates projected on the chart are only approximate.\nWe have worked hard on Kea 1.6.0 and we hope that you will find it useful. We welcome user feedback at info@isc.org or in our GitLab instance.\nUpdated Software Support Policy Kea Gitlab project ","permalink":"/blogs/kea-1-6/","tags":["Kea","DHCP"],"title":"Kea 1.6 Adds Configuration Database"},{"categories":["ISC"],"contents":"Internet Domain Survey - Since 1981 Since 1981, the Internet Domain Survey has been made and the results published quarterly. Despite its name, it is not a survey of domains; it is a survey of the use of IPv4 internet addresses. The Domain Survey has been managed and distributed by Internet Systems Consortium since 1994. A year-by-year comparison of survey results shows the rapid growth of internet addresses in use. The results, along with a summary graph, have been released here on our site.\nSurvey Measures IPv4 Only Over the last few years, the rate of growth measured by the Domain Survey has slowed. This is because (as IPv4 space is exhausted) Internet growth is now dominated by IPv6 growth and private networks, neither of which can be measured by our survey. 
The 40-year-old Domain Survey predates by a couple of decades the production use of IPv6. The growth of IPv4 is slowing, but the growth of the Internet is not. We believe that the numbers in our venerable Domain Survey are now too misleading for us to distribute them responsibly.\nISC Retiring the Survey ISC is retiring the Domain Survey because we believe that its results are increasingly misleading, since they do not include IPv6. We believe that it is better to declare the Domain Survey to be a historical artifact rather than offer explanations for its puzzling numbers.\nHistorical Data Remains We have turned off the data collector. All of the historical Domain Survey data remains available at https://www.isc.org/survey/. We will not gather new data, but the historical data is still useful and interesting.\n","permalink":"/blogs/domain-survey-ends/","tags":["Internet"],"title":"ISC terminates long-running IPv4 Domain Survey, looks towards IPv6"},{"categories":["ISC"],"contents":"ISC RADIUS The recently released isc-radius package is a NodeJS framework for implementing RADIUS servers and for adding RADIUS client support to NodeJS applications. It can be installed using the npm package manager.\nIt is not a full-blown RADIUS server with lots of bells and whistles. The design philosophy is that anything beyond the most basic AAA system is difficult to describe using static configuration files, so users of the framework instead supply business logic in the form of code.\nA business logic function looks at the attributes of an incoming RADIUS request, e.g. 
to authenticate users, or to save accounting records, and then populates the RADIUS response accordingly, while the framework itself takes care of the protocol implementation.\nBusiness Logic Examples Here\u0026rsquo;s an example of a trivial authentication handler that only allows a single hard-coded username/password pair to log in:\nfunction my_login(req, res) { if (req.get('User-Name') == 'myuser' \u0026amp;\u0026amp; req.get('User-Password') == 'mypass') { res.code = 'Access-Accept'; } } A similar function that instead looks up the username and password pair in a database might only take a dozen or so lines of code.\nMultiple handler functions can be registered, and incoming packets are passed to each handler in turn. A handler can exit the chain and generate a RADIUS response immediately by returning a true value.\nHere\u0026rsquo;s a function that adds default IP assignments if they haven\u0026rsquo;t been added by an earlier handler:\nfunction ip_defaults(req, res) { if (res.code.toString() === 'Access-Accept') { if (!res.has('Framed-IP-Address')) { res.add('Framed-IP-Address', '255.255.255.254'); } if (!res.has('Framed-IP-Netmask')) { res.add('Framed-IP-Netmask', '255.255.255.255'); } } } where 255.255.255.254 is the address used to tell most Network Access Servers to give out a dynamic address from their pool instead of a static IP address.\nThe package includes a test server (test-server.js) that includes both of the above functions, as well as test clients that exercise the framework\u0026rsquo;s RADIUS client code by initiating RADIUS authentication and accounting requests and act on the responses.\nProject Status Please note - this is an early-beta R\u0026amp;D release that we\u0026rsquo;re releasing as Open Source to the wider community pursuant to our public benefit remit. 
The software is licensed under the Mozilla Public License v2.0.\nIt has not been deployed in production, but it has been tested for interoperability against the radclient utility included with FreeRADIUS.\nIf you have an interest in RADIUS, please do give it a try. While this is not an \u0026ldquo;official\u0026rdquo; ISC-supported project, we welcome contributions and reports of any issues at the project\u0026rsquo;s ISC Gitlab page.\n","permalink":"/blogs/radius-framework/","tags":["open source","radius","aaa","nodejs"],"title":"A NodeJS Framework for the RADIUS protocol"},{"categories":["BIND","DNS"],"contents":"The DNS Flag Day is an initiative of DNS vendors (both open-source and proprietary) and DNS operators. Its aim is to make the Domain Name System (DNS) protocol more reliable, secure, and resilient while gradually removing workarounds for broken DNS behavior. Sometimes it takes a coordinated group effort to remove support for a broken behavior; if only one DNS server package implemented new rules on its own, users could simply use different software that still permitted the unsupported behavior.\nThe first-ever DNS Flag Day was held on February 1, 2019. It targeted removing a workaround to accommodate DNS authoritative servers that incorrectly handled the Extensions to DNS (EDNS) protocol. DNS software vendors, working together, pledged to release versions of their DNS server implementation with these workarounds removed. As a result of the DNS Flag Day 2019, we\u0026rsquo;ve seen DNS vendors and operators all around the globe finally standardize their DNS server implementations correctly.\nFor DNS Flag Day 2020, the idea is the same: make the Internet a better place through a coordinated effort across participating DNS implementers, vendors, and operators. This time, however, the target might not seem directly related to DNS: IP fragmentation. The truth is that DNS is one of the few prominent users of IP fragmentation. 
When DNS messages are transferred between the DNS server and a DNS client over UDP, they can exceed the Maximum Transmission Unit (MTU) on any part of the path between the two endpoints. The MTU might vary between any two interconnects; while the standard MTU of Ethernet is 1500 bytes, the unit size is effectively reduced by encapsulation into different protocols (the most basic example would be VPN). When the MTU is exceeded, the IP packet gets fragmented (split into multiple parts) and reassembled.\nThis IP fragmentation is considered fragile and harmful by many; there\u0026rsquo;s an IETF Draft that describes IP fragmentation and how it makes Internet communication less reliable. The situation got even more complicated with the introduction of IPv6, where the packet must be fragmented by the sender; there\u0026rsquo;s a specialized ICMP message for that, which (not so surprisingly) might get blocked by incorrectly configured firewalls. Our APNIC colleague, Geoff Huston, measured IPv6 fragmentation and also considers it unfixable.\nEven if we could fix all the broken networking equipment, and all the broken configurations, IP fragmentation makes certain attacks on DNS possible. As the DNS Query ID and UDP port are carried in the first IP fragment, a clever attacker might spoof the second fragment and poison the DNS cache by swapping the subsequent good IP fragments with their own. If you are interested in the topic, I would recommend you read this presentation by Fujiwara-san: Measures against cache poisoning attacks using IP fragmentation in DNS.\nDNS Flag Day 2020 is an effort to fix the IP fragmentation in DNS by making small, albeit important, changes. First, the default maximum EDNS Buffer Size will be changed to a value that prevents IP fragmentation. The recommended value is going to be slightly smaller than the minimum IPv6 fragment size, around 1220-1232 bytes. 
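The upper end of that range falls out of the IPv6 minimum MTU; a quick back-of-the-envelope check:

```javascript
// Why ~1232 bytes? Every IPv6 link must carry at least a 1280-byte packet
// (RFC 8200). Subtracting the fixed IPv6 header (40 bytes) and the UDP
// header (8 bytes) leaves the largest DNS payload that is guaranteed to
// fit in a single, unfragmented packet.
const IPV6_MIN_MTU = 1280;
const IPV6_HEADER = 40;
const UDP_HEADER = 8;
const maxUnfragmentedPayload = IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER;
console.log(maxUnfragmentedPayload); // 1232
```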
The second change stems from the first one; when the DNS response won\u0026rsquo;t fit into a UDP packet, the default behavior of DNS is to fall back to TCP. That means that either you MUST make sure all your DNS responses fit into a ~1220-byte maximum packet size, or both the DNS client and the DNS server MUST be able to communicate via TCP.\nWhat does this mean in detail? Authoritative DNS Servers Authoritative DNS servers must be able to respond to DNS queries using the TCP protocol. Beware: even if the DNS server itself correctly supports the TCP protocol, which has been an integral part of DNS from day one, there might be a zealously configured firewall sitting in front of the DNS server blocking the TCP communication over port 53. Next, the maximum accepted EDNS buffer size will be set to ~1220 bytes; the authoritative DNS server MUST honor the requested EDNS buffer size and never send a DNS response larger than the requested size.\nBIND 9 is already compliant with both the TCP and EDNS buffer size requirements, and you can already configure your BIND 9 server to never send DNS responses larger than ~1220 bytes, by adding max-udp-size 1220; to the options {}; section of named.conf:\noptions { max-udp-size 1220; }; Recursive DNS Servers Recursive DNS servers must honor all the same requirements as authoritative DNS servers, with the extra requirement that they must never advertise an EDNS buffer size larger than ~1220 bytes. They must also be ready to fall back to requerying over TCP if a truncated DNS response is received.\nAgain, BIND 9 is already compliant with the hard requirements; you can test changing the maximum advertised EDNS buffer size by setting edns-udp-size 1220; in the options {}; section of named.conf:\noptions { edns-udp-size 1220; }; Other Changes in BIND 9 Currently, BIND 9 tries very hard to guess the maximum allowed EDNS buffer size that will be accepted by the DNS server on the other side. 
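On the wire, the buffer size a client advertises travels in the CLASS field of the EDNS0 OPT pseudo-record (RFC 6891). The following is a minimal sketch of a hand-built query using Node's Buffer API; it is not a full DNS implementation, and the helper names are ours.

```javascript
// Build a bare-bones DNS query for "example.com A" whose trailing OPT
// record advertises a 1232-byte UDP buffer size (carried in the OPT
// record's CLASS field, per RFC 6891).
function encodeName(name) {
  const labels = name.split('.').filter(Boolean).map((label) => {
    const b = Buffer.from(label, 'ascii');
    return Buffer.concat([Buffer.from([b.length]), b]);
  });
  return Buffer.concat([...labels, Buffer.from([0])]); // root terminator
}

function buildQuery(qname, ednsBufSize) {
  const header = Buffer.alloc(12);
  header.writeUInt16BE(0x1234, 0);   // query ID
  header.writeUInt16BE(0x0100, 2);   // flags: RD set
  header.writeUInt16BE(1, 4);        // QDCOUNT = 1
  header.writeUInt16BE(1, 10);       // ARCOUNT = 1 (the OPT record)
  const question = Buffer.concat([
    encodeName(qname),
    Buffer.from([0, 1, 0, 1]),       // QTYPE=A, QCLASS=IN
  ]);
  const opt = Buffer.alloc(11);      // zero-filled: TTL and RDLENGTH stay 0
  opt.writeUInt8(0, 0);              // empty root name
  opt.writeUInt16BE(41, 1);          // TYPE = OPT (41)
  opt.writeUInt16BE(ednsBufSize, 3); // CLASS = requestor's UDP payload size
  return Buffer.concat([header, question, opt]);
}

const pkt = buildQuery('example.com', 1232);
// The CLASS field of the trailing OPT record starts 8 bytes before the end.
console.log(pkt.readUInt16BE(pkt.length - 8)); // 1232
```

A resolver honoring DNS Flag Day 2020 would put 1232 (or less) in that field; a server that respects it never sends a larger UDP response and sets the TC bit instead, prompting the TCP retry described above.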
We will not, for the time being, remove the code that makes this possible, and we will not limit the maximum EDNS buffer size that a BIND 9 user can configure. We may add a warning when the user configures the EDNS buffer size beyond the limit proposed by the DNS Flag Day 2020.\nThe only user-visible change will be a change in the default values of the edns-udp-size and max-udp-size configuration options.\n","permalink":"/blogs/dns-flag-day-2020/","tags":["DNS Flag Day","DNS Flag Day 2020","BIND"],"title":"DNS Flag Day 2020"},{"categories":null,"contents":"ISC operates the \u0026ldquo;F-Root\u0026rdquo; domain server, one of the 13 Internet root name servers known as A-Root through M-Root. We have operated F-Root for the Internet since 1994. The Root Server Technical Operations Association maintains a map that shows the location of all of the world\u0026rsquo;s root name servers. Over 230 of them are F-Root nodes.\nAnycast If you are perplexed that the number of F-Root servers is greater than 13 (the number of root name servers), you can read about the \u0026ldquo;anycast\u0026rdquo; scheme used to make the servers collectively behave as one server. F-Root answers queries over IPv4 on 192.5.5.241, and over IPv6 on 2001:500:2f::f using a hierarchical anycast technique and BIND 9 software. Network operators can improve their access to the F-Root name server, and hence the reliability of the DNS in general, by peering with ISC at the exchange points where we maintain a presence.\nPeering We have almost 3,000 F-Root peers. 
For information about peering with ISC, see our peering information page.\nHosting F-Root If you think your users would benefit from the improved DNS stability that a local F-Root Server would bring and are interested in hosting an F-Root node, please first read the Hosting an F-Root Node - Process Overview page and the technical requirements for hosting an F-Root node for more information.\nIf you are a researcher and your research involves F.root-servers.net in any way, please contact us at f-root@isc.org and we will be happy to provide you with first-hand information on any aspects of F-Root.\nLocations For a current list of F-Root nodes, see the Root Server Technical Operations Association website and select F-Root at the bottom of the page.\nWhich F-Root Node Am I Using? To ensure that your users have access to an appropriate local root server, it can be useful to find out which instance of the F-Root name server you are currently using. If there is a site listed close to you but you do not appear to be using it, ask your ISP to contact us so we can make the local site available to their customers.\nTo find out which F-Root site is answering your queries, try one or both of the following:\ntraceroute f.root-servers.net The route your packets take to reach F should give you some information about which site you are using.\ndig +norec @f.root-servers.net hostname.bind chaos txt This command sends a query to the F-Root nameserver, asking it to reveal the name of the particular server which answers the query. The hostname will contain a three-letter IATA airport code, allowing the location of the server to be identified. The dig utility is distributed with ISC BIND 9.\nF-Root\u0026rsquo;s Data Collection Policy ISC routinely captures all DNS traffic sent to F-root for analysis in the event of a network attack or to help diagnose network issues. 
This data is deleted after 10 days.\nOnce a year (or occasionally more often if significant changes are happening in the global DNS) a 2- to 3-day snapshot of anonymized F-Root traffic is sent to DNS-OARC as part of their \u0026ldquo;Day In the Life of the Internet\u0026rdquo; (DITL) survey. Our data is not shared elsewhere.\nLinks F-Root Network Peering Hosting an F-Root Server - Process Overview Hosting an F-Root Server - Technical Information Contact Us Root Server Technical Operations Association Documents All RSSAC publications, including those related to root servers, are available at the RSSAC site ISC Report on F-Root’s Compliance with RSSAC001v1 What is the DNS Root? Presentations F-Root Anycast Placement Research Using RIPE Atlas (UKNOF 2015) F-Root in Africa (AFRINIC 2014) Research on the DNS root How much traffic do the Root Servers see? Root response-time measurements - CAIDA Root response-time measurements - RIPE Atlas ","permalink":"/f-root/","tags":null,"title":"F-Root"},{"categories":["Company News","ISC"],"contents":"Where is the ISC website I\u0026rsquo;m used to? Gone. Well, hopefully we have a backup somewhere.\nThat site was built using WordPress, a very powerful, flexible, and full-featured open source software system. I loved WordPress and I really appreciate the team that has been maintaining and extending that software for years. Thank you guys!!!\nBack to basics We like to think that our users have superior taste, and don\u0026rsquo;t prefer a website with flashing, spinning graphics and lots and lots of repetitive marketing text.\nWe think our users want a short path to get to the 5 things they need on our web site, with fast download performance and less clutter. Now that we have a new Knowledgebase with a really good search function, we don\u0026rsquo;t need to put as much information on the website. 
So, we thought we would clean house and go back to basics.\nAlso, we had one bad infection on our WordPress site a few years ago, and we expect that this static site will require less software maintenance (we\u0026rsquo;ll see) and will be less prone to viruses.\nSo what is this site? We decided to use the Hugo static site engine. We hired Themefisher to create a custom theme for us, mostly because we saw they were contributing a lot of free open source themes to the Hugo ecosystem. The software on the download site, and eventually this whole site, are being cached by the Fastly CDN (for free, thank you Fastly!!) - so it should be much faster than the old site.\nWhat are we losing in the process? Some people will miss things they liked about the old site. I really enjoyed cooking up cheesy new graphics for the old site. (I suspect that behind some of the momentum for a static site were people with more sophisticated graphical taste.)\nWe did decide we could do without some of the bells and whistles we had on the old site. In migrating to a static site, we gave up the live RSS feeds from our Twitter channel, as well as the automatic updates from our user mailing lists and Knowledgebase. We have links to those things and people will just have to go and look there themselves.\nWhat can we do to make this site more useful and effective for you? If you have comments, criticisms, or requests, we are happy to hear from you at web-request@isc.org. If we can fix it, we will.\n","permalink":"/blogs/static-site/","tags":["website"],"title":"ISC's New Static Website"},{"categories":null,"contents":"We\u0026rsquo;re sorry, but the link you\u0026rsquo;re trying to reach is not available. 
You may have mistyped it, or you may have found a broken link.\nIf you believe you\u0026rsquo;ve found a broken link on our site, please email us at web-request@isc.org and tell us the page you were on and which link you were trying to reach.\nIf you can\u0026rsquo;t find what you\u0026rsquo;re looking for on our site, please go to our contact form and send us a message.\nThank you for your interest in ISC!\n","permalink":"/404/","tags":null,"title":"Oops!"},{"categories":["BIND","DNS"],"contents":"BIND 9.14.0 is our new stable branch for 2019. As of BIND 9.13 and 9.14, BIND has adopted the \u0026ldquo;odd-unstable/even-stable\u0026rdquo; release numbering convention. BIND 9.14 contains new features added during the BIND 9.13 development process. Maintenance on the 9.14 branch will be limited to bug fixes and new feature development will proceed in the unstable 9.15 branch. [BIND-Announcement]\nMajor changes since BIND 9.12, the prior (2018) stable branch, include: Modernization and refactoring: the primary focus of our 2018 development efforts has been on modernizing BIND, refactoring complex code, and removing features and workarounds that are no longer needed. DNS Flag Day changes. Workarounds for servers that misbehave when queried with EDNS have been removed, because these broken servers and the workarounds for their noncompliance cause unnecessary delays, increase code complexity, and prevent deployment of new DNS features. In particular, resolution will no longer fall back to plain DNS when there is no response from an authoritative server. BIND can no longer be built without OpenSSL, which it now relies on for a non-blocking CSPRNG (Cryptographically Secure PseudoRandom Number Generator). Some older, insecure algorithm support has been deprecated. ISC can no longer validate or support some legacy systems, including old versions of UnixWare, BSD/OS, AIX, Tru64, SunOS, TruCluster, and IRIX. Some code supporting these has been removed. 
On UNIX-like systems, BIND now requires support for POSIX threads, the Advanced Sockets API for IPv6 (RFC 3542), and standard atomic operations provided by the C compiler. BIND\u0026rsquo;s task manager and socket code have been substantially modified. The manager uses per-CPU queues for tasks and the network stack runs multiple event loops in CPU-affinitive threads. This greatly improves performance on large systems, especially when using multi-queue NICs. A new plugin mechanism has been added to allow extension of query processing functionality through the use of external libraries. We plan to migrate some existing code to this new mechanism over time, and to leverage it for optional extensions, such as new DDoS mitigations. Zone types primary and secondary are now available as synonyms for master and slave, respectively, in named.conf. As a result of these changes, you may need to update your configuration to remove options no longer supported.\nThere are two known configuration issues with this release: it will not build with \u0026ldquo;--with-dlopen=no\u0026rdquo; and named will not load a configuration with \u0026ldquo;allow-update\u0026rdquo; and \u0026ldquo;allow-update-forwarding\u0026rdquo; at the global level. Both of these issues will be addressed in 9.14.1.\nNew Features QNAME Minimization was added and enabled by default in relaxed mode. This is a privacy-enhancing improvement that reduces the amount of query information unnecessarily shared with upstream servers. Development of this feature was sponsored by the Open Technology Fund. Mirror zones enable named to serve a transferred copy of a zone\u0026rsquo;s contents without acting as an authority for the zone. This is designed for (and only for) serving a local copy of the DNS root zone. Development of this feature was sponsored by ICANN. 
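QNAME minimization (RFC 7816) can be illustrated with a toy sketch; this is our own illustration of the idea, not BIND's implementation. Rather than sending the full name to every server in the delegation chain, the resolver widens the query one label at a time:

```javascript
// For "www.example.com", a minimizing resolver asks the root only about
// "com", the com servers only about "example.com", and so on, so each
// zone sees no more of the name than it needs to delegate.
function minimizedQueries(fullName) {
  const labels = fullName.split('.').filter(Boolean);
  const queries = [];
  for (let i = labels.length - 1; i >= 0; i--) {
    queries.push(labels.slice(i).join('.'));
  }
  return queries;
}

console.log(minimizedQueries('www.example.com'));
// [ 'com', 'example.com', 'www.example.com' ]
```

The "relaxed" mode mentioned above refers to falling back to the traditional full-name query when a server responds badly to the minimized one.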
BIND 9.14 Performance Improvements We tested using the current head of each major stable branch currently supported, which includes 9.11, our Extended Support version; 9.12, the stable branch for 2018; and 9.14, the new stable branch for 2019.\nThe following performance profile was generated using ISC\u0026rsquo;s PerfLab software (available on GitHub). The zone files and scripts used are also in the PerfLab repository on GitHub. The figures are not a guide for the absolute performance you should expect to see in any specific deployment, but they show significant relative performance improvements in the more recent versions, compared to the BIND 9.11 version.\nComparison of Performance, measured as Queries Per Second, across 3 stable branches of BIND\nBranch BIND 9.11 BIND 9.12 BIND 9.14 1k zones 657 704 874 1M delegations 526 642 712 1M RRs 538 656 748 1M zones 591 632 745 Recursive 534 657 796 Root Zone 64 363 404 Performance has improved across the board, for each of the scenarios modeled. The huge jump in performance for the Root Zone is due primarily to the addition of a glue cache. The improvement in the other scenarios is mainly due to refactoring of the network socket code that enabled named to avoid context-switching (moving processing for a single query/response from one processor to another). Note that you will not observe this improvement if you test with dnsperf, because of some interaction between dnsperf and multi-threading support in BIND.\nRelated posts: dnsgen – a DNS packet generator\nBIND 9 Performance History\n","permalink":"/blogs/bind-9-14-released/","tags":["BIND","release"],"title":"BIND 9.14.0 Released"},{"categories":["BIND","DNS"],"contents":"In early 2018, we started using GitLab as the development platform for BIND 9. One of the reasons for that move was to take advantage of GitLab\u0026rsquo;s continuous integration (CI) capabilities to test every change to the source code before merging it. 
Once we made BIND\u0026rsquo;s test suite and the CI environment play nicely with each other, the testing process became stable enough to warrant requiring every merge request to pass the CI process before being accepted. However, stabilizing the test process is not a one-time effort - it is more of a never-ending game of whack-a-mole as all changes, especially ones affecting core named components, can introduce subtle variations in behavior, causing some checks to fail intermittently. We also keep on adding new tests all the time. Thus, not all issues can be detected before a change is merged and sometimes it takes hundreds or thousands of test runs to trigger a specific failure mode. We investigate such issues as they appear; most of them turn out to be imperfections of the test scripts themselves, but some are caused by genuine BIND bugs.\nRecently, one change turned out to be particularly disruptive for the stability of the BIND 9 test suite: refactoring the task manager. This should come as no surprise, given that the way this part of the code operates essentially defines how the numerous tasks inside named are scheduled and performed. We started going through all the test failures and quickly noticed that one CI machine stood out in terms of system test error counts. A quick look at a few of the failed test jobs on the problematic machine revealed that some named instances on it suffered from inexplicable, several-second stalls at seemingly random times. This machine was configured with the same CI job concurrency settings as other CI machines and was similar (though not identical) to them in terms of CPU core count, memory, and storage characteristics, yet no other machine exhibited the same problem. 
We were worried that the increased test error rate on that machine was a symptom of some bug in the refactored task manager code that only caused issues in certain environments.\nFirst, we tried reproducing the failures observed in the CI environment by moving the problematic host out of the CI pool and running the tests manually on it. At first, all the tests appeared to be working fine, which was slightly confusing. However, we quickly noticed that worst-case CI loads were higher than the ones we were testing, and we subsequently managed to confirm the issue by upping concurrency levels for BIND system test runs.\nSince the problem could apparently only be triggered under high load (and even then reproduction was intermittent), we wanted to get a better idea of the circumstances in which these mysterious stalls were happening, hoping that we could attribute them to a specific piece of code (either in the test scripts or in the C code being tested). Thus, we prepared a \u0026ldquo;quick and dirty\u0026rdquo; script to find and list large gaps between consecutive lines in named log files. While this approach was obviously quite crude, as some named instances used in system tests are expected to remain idle for extended periods of time, it allowed us to see how specific named instances behaved on properly working CI machines in comparison with their counterparts on the problematic host during failed test jobs. It soon turned out that most stalls were happening when named was writing data to disk, but that was it in terms of specificity - stalls were appearing in a vast number of different situations.\nThis was a surprising find because the problematic machine was equipped with fast NVMe SSDs and, while running BIND 9 system tests involves quite a lot of small-block I/O, it should not be nearly enough to make a modern server choke, even with a dozen test jobs being run in parallel. 
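A gap-finding script in that spirit might look like the sketch below. This is our reconstruction, not the actual script, and it assumes log lines begin with named's default DD-Mon-YYYY HH:MM:SS.mmm timestamp format.

```javascript
// Scan named log lines and report adjacent timestamped lines that are
// more than thresholdMs apart - i.e. candidate stalls.
const MONTHS = { Jan: 0, Feb: 1, Mar: 2, Apr: 3, May: 4, Jun: 5,
                 Jul: 6, Aug: 7, Sep: 8, Oct: 9, Nov: 10, Dec: 11 };

function parseStamp(line) {
  const m = line.match(/^(\d{2})-(\w{3})-(\d{4}) (\d{2}):(\d{2}):(\d{2})\.(\d{3})/);
  if (!m) return null; // line without a leading timestamp
  return Date.UTC(+m[3], MONTHS[m[2]], +m[1], +m[4], +m[5], +m[6], +m[7]);
}

function findGaps(lines, thresholdMs) {
  const gaps = [];
  let prev = null;
  for (const line of lines) {
    const t = parseStamp(line);
    if (t === null) continue;
    if (prev !== null && t - prev > thresholdMs) {
      gaps.push({ gapMs: t - prev, line });
    }
    prev = t;
  }
  return gaps;
}

// Fabricated demo log: the third line arrives 5150 ms after the second.
const demo = [
  '27-Apr-2021 12:00:00.000 starting BIND',
  '27-Apr-2021 12:00:00.250 command channel listening',
  '27-Apr-2021 12:00:05.400 zone example/IN: loaded',
];
console.log(findGaps(demo, 2000)); // reports the single 5150 ms gap
```

As noted above, such a tool is crude - idle named instances legitimately log nothing for long stretches - but it is enough to compare the same instance's behavior across machines.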
Thus, to test the theory that the I/O subsystem was to blame, we performed a number of test runs on the problematic machine with all I/O confined to tmpfs mounts. Not a single test failed. After switching back to normal storage, the tests began failing again.\nThese results confirmed that some I/O issue was indeed the likely culprit. We could just permanently use tmpfs mounts for running system tests on the problematic machine, but that would make the test environment less realistic, so we still wanted to figure out what was wrong. We went back to comparing CI machines which were working properly with the problematic one, concentrating on I/O-related matters.\nThis eventually drew our attention to the mount options used on the problematic machine, specifically the discard option, which was set for the / mount point on the problematic machine but not on other CI machines. Let man speak:\ndiscard Disable/enable the discard mount option. The discard function issues frequent commands to let the block device reclaim space freed by the filesystem. This is useful for SSD devices, thinly provisioned LUNs and virtual machine images, but may have a significant performance impact. (The fstrim command is also available to initiate batch trims from userspace.) Apparently the warning in the excerpt quoted above is no overstatement because, lo and behold, running mount -o remount,nodiscard / solved the problem - the tests no longer failed. It looks like the SSDs used on the problematic machine did not cope well with at least some waves of TRIM commands sent by the operating system when a large number of BIND system tests were run concurrently. 
Once we confirmed that the problem resurfaced after enabling the discard mount option again, we became confident that modifying the /etc/fstab entry for the / mount point so that it does not include the discard option was enough to fix the problems we were observing.\nThe last piece of the puzzle was to determine how just one machine ended up with the discard mount option set. The answer turned out to be simple: all other machines were provisioned automatically by the hosting provider whose machines we were using for CI purposes, while the problematic one was set up manually by a member of the BIND team. Coincidentally, the problematic machine was added to the CI pool around the time when the task manager was refactored, which prevented us from noticing the extraordinary test failure rate for this machine earlier. 
While we are aware that we will never achieve a 100% success rate in this department, we can - and will - still try to get close to it.\n","permalink":"/blogs/a-curious-case-of-intermittent-bind-system-test-failures/","tags":["bind","testing","gitlab"],"title":"A Curious Case of Intermittent BIND 9 System Test Failures"},{"categories":null,"contents":"-----BEGIN PGP PUBLIC KEY BLOCK----- Comment: GPGTools - http://gpgtools.org mQENBFEKeFYBCADaN83gsb0VDjlGZkYra0PPlHz/eczKBU+/6I/VBq/FcsFEc27/ O8IE05rIID10rXLjZ0k8y4ydvhI40eVZfxwaFvQEX/StVtU1ie3F7TS02ZuJ1yal YRtU29hhnZ5icDdiJ98gcZSH2WKhIWLRpmc60Lja/sTsO0lkLPJe9x2MDuzkQu9M Z7hlMgqZxZ1I/mQ/KsjT3oUt8euwyntg8/w/cpY8H0EVjyBnZWV2yejsLnbCo947 hbjvUMSluGs7AZP0d+yqpGNsgRQ9iHy0NiL3ELdBqD22cqGRGTkX76KcLoXvqLVY 450bBtXsI2uUXy5iL/eUkUP2JgWQybjju/M3ABEBAAG0SEludGVybmV0IFN5c3Rl bXMgQ29uc29ydGl1bSwgSW5jLiAoU2lnbmluZyBrZXksIDIwMTMpIDxjb2Rlc2ln bkBpc2Mub3JnPokBPgQTAQIAKAUCUQp4VgIbAwUJA8JnAAYLCQgHAwIGFQgCCQoL BBYCAwECHgECF4AACgkQRax4Vxic28XzIQf6AwLblJ98KI6l8gWqKVHMErYgl9+Q RiIxrqJtyn4OjeZHX9diVjv2HlsRjnTpNl5MiSB9tXvq+GX696w6dtpoqYjZEQoP ZCwE2USR6XO71eYO3rxLBnc0ymRvQm4zB2YKqworQDym0+wE8xiGBO8LyyVDfS5G aGWXl0YJkfNYXzhEp6toIiLwRE0uP0TarHcHCo2CboVBgODvDZqwSBfT+i6dT+Gy 6nVEh3j7XnqgjCQ25cGev9sHR3hobT/fxG0F2YZ7sMwpWj9q0Y/dOlY7SV/ZGSs+ ubKQ55BWsTjJRrNqyDX8QLb8oVic5q/yQkV+RTs1sP5s6JSs0KqQdyR3ZYhGBBAR AgAGBQJRCnurAAoJELPQb+Q+GMAtN1UAnjQBuLl/IsHHbCyS7TwYmT8yifLfAKDJ mnGPxhXJgnJit8atkKAUfwrk8IhGBBARAgAGBQJRE9dTAAoJEDLH2cQTuJG05IUA oOTfQlPBUYZXFP7kimqfCHyHP/CTAJ93dSpYmAB/tCcMVXeBrCQYnt3oJohGBBAR AgAGBQJRGYSPAAoJEBCjuW7iGIwGa/4AoI29RoSof3H/MjZ3vYc+mi3dayARAJ9h J3zPkQRmhck9Zfg8sALWLOk4EYkBHAQQAQIABgUCURVNDQAKCRBXiU/QLbVDxgmF B/wOionzIxilL7hmO4MJUxUBXZOQMV/k1DxvBoQ0M4TXmEWA17tRQnLDm0ou43NH V9Rem/dlzcOdsN5wgoLDu256w+ctpX8UNjMEnhLGrJG3z/vlzKmJPleDGSootgBF DOMNxYSY6OelBqenv09h1n99UHFmeM1mSvHobrpynKoWUI5lWciSEtg3rn0lg0pT t2OAAjhXofaqxi/TppQAgtTuC1ACI4z+Ust5/0mHji5C3SRtf/l9s7PLp3h0HTDI fhMdpGFvuovxv+EhM4pNjXK6QoXt6Nn2jhXwawvD7cVh6mZeF9Lbk1wYsPQmAoKA 
CV6eFKiShH6ioj934jEnh5dXuQENBFEKeFYBCACnG27rzwkpyvo1dY6d5ZefU+2L aNGF1Uo2U2QaAsnvq9BNV0yaZMPAS3JhAzoL2qMhePxwTDP4sZFRegJc5f1iiR/m g6M2lC62fgpVL4hG9YxhorSqxNpr9zQVoiIVa+zDIa5xcBf+r0fkigry5gpe/nDD v7HeXV62D39B0ND8IA3IxBuMGffxZ+cXL/OhSX6Yxkt+qz7c3BPkL6b/gz31KjIL tdAtpm3c8E7itWNScyQKCJew10Mhme/ReBQrLNKCt09lyooQbfKPy/lpM4LaTrQV /DX/K0DeHJ/EuZ3HGzIbI1DhljxnGYFnoDJRZUr3lsvihCpQKp/NKxwelaiRABEB AAGJASUEGAECAA8FAlEKeFYCGwwFCQPCZwAACgkQRax4Vxic28XgkAf/QyVT4jC7 mTogjcq58TZqAyxEPaGTVWSllwLTyqIt8fFf/gesJl3uNxWIukV4r4jkloMzYbbU Kj2MEyg+H4RQOdqUbd6CVeB40RtR8Bmkztz1R6NM1WavDpytyB5SEpLaXSgG72BF GVxIkRhjxeeB8okcGxwKuxp+ndExWQ6bhOLgJzE6i3e5DXRTwaiOO1antuCohoUE XxpXy+6BckuX0geqBUS2+j2aIz7RN+J687O5kyuBVjjMjZmItytUzUq4h5nacgps 1aQmkMfZnBR+wjH8pPGOmqV341K3MZAH2/XBrMgEMdBTiGrl106WTSWssMCYwnMd tq+V25aYVGS6/g== =9KZz -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/201314pgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":null,"contents":"-----BEGIN PGP PUBLIC KEY BLOCK----- Comment: GPGTools - http://gpgtools.org mQENBFR9szQBCADAwiKoEyFh32NHLc+c1vGgZ2lDKvXd7MoZZAOM2NUJkaDqt+Xy 537ycGmZRqukpGUsa0fDeqjOq+l4paX3QnW1DEs8ypnWi5Zme4lYHysmNeG33ihe SMeGRbnZi+9VTho3NGv3iM9gtRMDXTB5wi7OCu0XNypUtR2afP9F9EgRs4nJ5iMz o5c1y0wH4y+lJ33NCHZDj8ditl64yMbuVlZOD3UxoQqw/8E6T4rHly64g9s6Ld7o YybSN/hKIXFZRqrTebEd2LwZqGhPQIgYDUdumwvtw6jfrosbMOx0tan3esnxlptl fbkmOsoi8pPJqlo2VtLBx9XN53ZFMqc/Pj3tABEBAAG0TUludGVybmV0IFN5c3Rl bXMgQ29uc29ydGl1bSwgSW5jLiAoU2lnbmluZyBrZXksIDIwMTUtMjAxNikgPGNv ZGVzaWduQGlzYy5vcmc+iQE9BBMBAgAoBQJUfbM0AhsDBQkEEtKABgsJCAcDAgYV CAIJCgsEFgIDAQIeAQIXgAAKCRBvpuvJkRpMAhD3B/ip18kQSE4//Yt8VMF6kciD bcCQAOwy45/1frZaX9wS5ri/pdHxB0vw4SwrpJgJEkLGG0sEzRwS87QeFCtGDhIw J/+5JX6in0cJ+hr2G7DJUpVgKp3Y8lEZ3ch+gAZ3k45uzzoOTT4tm6uXWlkOnlPJ jUb1s9bVzL9zUsJ7XaV5x4aR3Bw53/izJEIaxWHj1CEjAzRC7wPQpo/EK+fWzFBf enSCBJEvW1vkomtb1jMdj+uhod1LzFdXHdQjDCSKn/Glpn/YcZ9/LkiUSwNLTpL4 CxDbEis4QWNDndAR/u9REC1zDW/xQEC3UxsB06Zm9NaWjAHy8s0rOihgoetaroaJ ARwEEAECAAYFAlStc2wACgkQlumWUDlMmaznBwf/ULOnlymSvFX840O4vmChh39S 
k1Ls/9WWjBhLe4xR7QERbklKnR3OCp8ExBR7ndwf43rAV7HbdCUlsIzhLV8S6iUx YjVft/wom2ny8xRNH0OAwg1UzkrD5h4zu53xEs57ioE5zI9QNgrrGvjDEDjO4NCo DGqMkeUTHuDX0QP3Ilfkln3wWaw9LSuwDeHcu+R8WkhS6FMzf649K6S1k3CNC0M+ zxVCbJ1S2AFFOgD8olB9OJu59SvNHAjvcdqQ9Qf+nCPtieR/LZiQpGlbfRG0PIZm 3eE4w0Egc/PnuIVcuDh3vLxm2BU51w4ZKxz0JgHBnGgBaty/mvfREhbj18GbA4kB HAQQAQIABgUCVK10GgAKCRC9l9xjnujEe/4nCACI7f0Xd0U3nPckD+uXDyPjeIxi QEp86MV4lO+anTmC/zOA2KqUsd6sRULzwd8l15VYpXqvxufM6YW1FHjhKPODj9bP P0nbHRjen6znjGs2smLjMZRvAUKoXOeQ6e1yseUAga31Doqz3O1zEGICq0sCPr5e VTQhdVfljKZ26ypijeqRab4c3GFix/skkzIbOwa46wv15X/6OmWt5Wsimt1zEkNN THilS4aMmCdCpeGX47q+lLepXngBJbvhW0lqeJWFDr69ONbFEWclCJraNmIWD+7m FrBvncJxBq2NIHzzocqVvbExt4XxP6Nk4aIgiNB3lV/rF7Io3S41dOSI4HO7iEYE EBECAAYFAlSqwTUACgkQs9Bv5D4YwC2XOwCgyzqRPw321UGSlg8BsLTL30oIp5sA nj0V/+8utHZDkXfCAVdaIIQNJmD/iEYEEBECAAYFAlSrw4MACgkQEKO5buIYjAZG ywCfbu462hUgo3zPQHOm8Wo266j8qWUAni78PlKHBZ28H2Rjq+KhVJ45nZhHiQIc BBABCgAGBQJUrX88AAoJEFdBJy0DB+J/vd0QALS/vqKga3uDu7BcN64XZEOKjY/C fqaqvOBToGerBfy7ypQRYVa6EmZCy0RpGTj68uyjaUtp3/kMSNqfL5OpHDIDgZHp D2StDiakzNpUu0shcMNAbOQuMe8RGOWzLnF5ceGihSEZ73xH7yjJLTKSWKJl39pH 60Y5lE8mYr/TNnb1o2LafzugnA3iHpz8aJ6hrZ07iLCcptsih/49cCEdzM0C9XS5 +GBcNu5/hmNu4Zs5UnMzZ6PiIucemoCKoe9m+JVl1yz72YFz6uuToMnTe5U1qSss CnKRHEegFKRsiHOiIE9Py8T1AxkIhktaRZZnv3UspYIrgm48/qIc2aZZpxnC9Ec7 JdFONZfioFpVG6Boh8RN1U/LS2bCihNHsotDByMWWbk/bTH5ahYL8SBx9ezU+8bq EsNzrYeyoKC32bKSThR39X1AWKBpkJB/iwjka5Gv+MPbh/M3VvQFVXQN/e2pD0lv KYqjQQERJm5isD9Egz2gfh8TQ8tIARtybYR/+QFuJwtishbGDFwIoDTGhDvfts88 sTsQzwclgwFlpRmXx6pdNq5PE0FT1zpSENWOHJWJNwG/5whhsij5zqSFIvrgU9Pf bMNoNhH/QBSfr674lJVeO/FaXSg8wc4A462ApJcE5TBkLqv2+AFYfruQo9ivRkae FU3mVt9F92ZiBc6LiEYEEBECAAYFAlTrQigACgkQi9gubzC5S1wIQQCgieUn67RR 0OpW6y7mAYDbI0tsZgoAoIcrlgiZcRUQWEuguOXdkR/6oUtriQEcBBABAgAGBQJU rZ8AAAoJEEWseFcYnNvFh58H/Rw0pv4aSJqkKFDiYoifDujObgqaqusYxal11jlV juce5J8sI6mcUsVvLqXJU3NPCMxbANWzbgZHbYJn2lgmba5Pg87FgR+Bg5bd1FSV bXp8np2T/ioclepOtIef/gFkNTBCkfbPXmuXpFlBI4HVCuQCOJjaWx2kIjHHL93C 
iW2mx/W5hQnjSdePNmcVcqa7kg1z2vIEsIZPM2rzpvUxYuqZd3KJYk6+GjxIMQUU ZCLQkm5c5OToQm+Ez3Sh+yt1yG5t/rRGLBsP2fwQnDrdnuU0e8xo1/KuI/6dGJsV BRbvBJkE5loGBXESvc9xAG+ZtdqvDUyVE4RtwoxmL3N/axKJARwEEAECAAYFAlTj mzwACgkQOntTjMkbgr+gLQgAiq7zKEvY6bLDuqQW343wHIq+T/lPdQFDeR/QmZBM RTXZmlkWRA3GZNn6fAkbYjWrGlKPw2eYtBWS0WP2LwRvrmrTOp6GCoKjxdGUCA7/ tfylytlTbVr27R6gJPoMZR+8hxij52qIxX5ULfVgx9DIagnG55dClOeMYDHjqIk8 uOB6GVTcZjhl9acavxrN7PlOSr7SHEYZUmCwsbM1hBVmN0vomlUDw0yC+zY3j9po CbmbzexxhV16cgX4Wve4KOMmdiJSfaAYC5dz8gA/ruNu2tHm00A+GzwGtmEnPE3I B2xmDE5kx78m6rF7adJlRhL6NeM+xn+n+k7h04IXlZUGAIkCIgQQAQoADAUCVUg3 zQWDA0hN5wAKCRAfRF6wIguS7NiaD/9nVUAIXSZx8xq2Lo37E6je1APbdSAT/K2T dLTMJsugzORVj4LVaSE4lOskMa3g8ffMdAnfGhEB5uLG9THDxOsvQ0ipdNdTr2hd pbo0v9J63H44LEaqKppOV3eHOJYXnfWI6Armv31cUGAuJkI1GZUIInL9jE80Ky/z YmkdSQwogPiSjb7i3ftS0QI7MR/hgJWz7vYLZ7o+Sqyse3PKl9Bw6SVL9oy05u/K x9nih5nh4mYWokfY3cEKVV9mhlTApkCgRqPWPVLN4/FfgoPIzKVG71wAVgkNwuZO AJH6+nNSV/N/nITyqLz0ddt4qxGtepZ+6P3FF7Qi4UNSGnhyHBbBkMbG3rRvo0VS gxWBUJQz3GAZjkS/YwGzR4F4P+IlFzj+1iKxke6vjGnXN06FzpsSYJ46j9EOk/+j 2K1ir/pUn7hF4QeLUCnxQL785kUiKg2+8KWkM6hGawUsvH9TqAfqCp8Mu4YDGXG0 KoBMeD7h6N9DZYGGk/kqbFhitnKZ2VanheaI6wFCjTCwDAnySfib+i1uKxaeddmS LMYiFlGz0E1pwjgALQF+qm6XHn6yD/vFLj/kiP1BtjY9dUTb6Yg3W01CyBOeFA/i SGZzWPgsbghlM7DJ16Jxq/Ij3M/7wfg6TiBgJoXLk7YXyWzGf2l2JI1kkLlZDCV7 vAwC5+7ORLkBDQRUfbM0AQgAreBuzWxpZYdYbGw+6E5/7HXyDT/43t/h9kTtNN60 82F06BjQsbG6X1UdGhmC89ueb4FmZmFgYyv+haVMS4X6xr8OvxIqpUONgiPSo7OK YwtiBophXxJDwuFQW1RCOblXkn8VD3s3M7Z8+AQijslshWuGw8Saz7WCX3Jqcpb/ 4dSi+XspTI1Gi99ol2k7i7K5j0IbPzvkOoTf0SF+UFY33pukJSan079VCzh4lOnq mS/YSQxVwRp20VcBUWVIV785R3zeBtVcz00JF9j3xkloHY6cjX+VTw6iZ0c637Pq Htt20L/DjaqyMGa77LYW2HhZfM93tZt+9V2UyNsEdkufwQARAQABiQElBBgBAgAP BQJUfbM0AhsMBQkEEtKAAAoJEG+m68mRGkwCERcH/Rer02pHu+BJlN53t6xIxao4 tZmWgp2PHKzcED2HcRcKQXqAx0Thvo6QhQfRXZoUJgldmbIWiXS2rnRoRMPrZoKA I5W3GWTHlUe1MuX9PVV6d/1MMhKISYMEj7cMOqMDdg4fzoWqXsmr4QxnHh7YvnQ9 ZbJFHcvOOnpMV71eHiupAmgjQ4G0vJfXc1978CvhnzkNfo7Wy/8IzRpaR8Zt0wB5 
4esNVEpqtodQvLJfiR+76ZmPwC2HEylUPJoeKX1YJfW7Aaj7quSK4Ay9RFufTmHk L3a3ZBRAkTWu8EqnMsQFbypCrl+ij7zIcdanO57iLiva4Qnf4UtSyO2VT7cX4+o= =VdqJ -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/201516pgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":null,"contents":"-----BEGIN PGP PUBLIC KEY BLOCK----- Comment: GPGTools - http://gpgtools.org mQINBFf1aL4BEADaw6pPFCoWEtbcGEbfFRsCxEK2PDjzG7+PWTDUpdJgzMvHKvWU BkKKpxxkWk6+irY4fZnaRkXKR6ggkTDRXucpssayXt95ZXdniOWGOuEGvGmIcif9 klfvDLxK3dq1WrsRGs35FE4puxldS0RAS8dcRlq0bqMpnaSPxay8bdvQF8v5syIF vW2ySfT21e1YgaMdSCu92kmg7lzrPccKFNuX3xkosGIglnoVcjpXqsZEIZjj6YAw cZiEGB1Lxc88WjWbhrct0S1Z4zITapRAFdY65i1POmHmcyqEDlhYvbPIfk99PUvB o6SbvE5IGChc+O5cqwp9i8sTw/ABewUkv2rcRfjaehQzIm6HHq3lX+ukqinic1fc +FsZnQNQXUoh9z3InKPzWkxOcc1DiXkMcXUdxSi7C0zghR/tFKTLHeTOxj8j6oaX DfWpdhBfFch2ogVQXZMyPaQxuObtG9aVffbpQsTHzAitz5/M7lXj2044DE8p9gcq ORMZnqAE/uVuYvGzdQZJEx8pDma4Aegx/Nn0Wpv19U2zw2dfGon3Ckrdi8G77K10 5++BB0ZFDia93kkEodcyJtdLMhFSxXV2XMMN8frO4jhHq86lnG71kbb5Y2ZdrkXz BCGo6sVMVqWWEXUp4COfgEboeDneLUTlVLFQbgqpgWCCFZKz/k0hQpZbOQARAQAB tE1JbnRlcm5ldCBTeXN0ZW1zIENvbnNvcnRpdW0sIEluYy4gKFNpZ25pbmcga2V5 LCAyMDE3LTIwMTgpIDxjb2Rlc2lnbkBpc2Mub3JnPokCPwQTAQgAKQUCV/VovgIb AwUJBF9JgAcLCQgHAwIBBhUIAgkKCwQWAgMBAh4BAheAAAoJEPGxG/Bc8C5XwvEP /jRPx7GaTG4PqXiNptV36r1q+Eg92oRozUqNcuhN/l09gThyClfQylsEhZBcTT3I VAjqqqAott5XNqw8pyPjERMFnqal0zBMZTln9RLkLnvoWQrHnEzG3CJ9Ndkk7niM DOpRI71h8+nMl7YbkXOy87qamvoZ2BQZcdL+a8R5p733JaPMDUy5leZKNiywThQQ iEtpWuz5u+zoALP5EKedPKCr0+xfGTUS5qptR2nHnRXnZouPfbQWKyEWtC3Qbiu+ 0ir9e2/4pp725g+os8TDCyzfRDLgD1cbxPrObwDAiw1B8KrL5l9WD70vfYpyMuvj aUVkNbL5kFEuKKOrQSxyehyR8pgUyUnYUejmSdXDg/BmOpXr6gscnRDpq7Th6yew 85Dy5ntYHVAO2Qq6Iafnsun0/i1g0Wlv+OEbN+YIXTMbweCt5qddC1ak6I0WomiP Frh8a2EeFrpRGEsKR9aFxUb+HdobclBMV4T2siewcW12W+MWnvRzVcMxFs/Yf1Wi 4DTb0sAztpftYk3vOVFkhIlTtgp0u9GPkSgCwIeW1ZCVMnxaWMfzDvzjt0cp09MQ a6MBwIoKPLIRwVOzoyIhS9PrI26e/hdcQPaJo6ESYbJGT64lTbkVoaWkKKbJbJrG gTnIzoxDL7l9rlYGOy3G1M6jZvT8uFanMM9vl22QfT5nuQINBFf1aL4BEADuNlry 
8PqyjDsl/olp5bUc3Y97jKyHAYNghg0VrwwoKtaGGieWXaV7HtUslmtmUlj7L8Dy zvBvOlmKZwGnAEA6UJSHEY+VbYzjxKaPzTHtWJpZtfskZkyCQeYy7yChxk500khU 8wNP5+zdkFU+d95jlj0Q26KS/ZLnK/0wZoSzGRNV2+CeTlPztPuzotcpY5eg/7X0 D/ie8WE7BOK2BoxNWHvmpmHjeAoW5asBlJ6Hyd7ON0tbP8+ieHbcAxapkr9IEr12 +/RPhmurAwRT1W3XNKPbqHhEMmanpjv9QFZnzhusPumebS9F3S3IdSPHnZZsCupV 5Ve7d6zqvbdb0f/nE/7tR+7jEFNe491kJUnzXTN7rKfjJkIVkxlYDkDH8K2r7kNb Ckds2sRN+w8/v8REJkA31FGhf2oO6mgV+Na3NpYtSN8tS2x9oC5cMteElvnBxoDf E0YUHSv5Ny5sGjjW/+RHqq1VxqKBHYbblQpfR1OsEsWzRWJ4R5F8Sum8k9XfRAdq T04gE9DlmGsg4ZNn4d1R+SvXaojIRhzOrFu2thEvQJp1cYMjM9CSIU+q3021E+li ED7zg5d5D19l2iVAmJkXfKsFdv4mHS1OM/IDgtu0vg7g4qRRB4oGQRlSl288Elpu wo/mO1LXwyqk6oH3pZB+Fu7axo1xN5EPvYZrFwARAQABiQIlBBgBCAAPBQJX9Wi+ AhsMBQkEX0mAAAoJEPGxG/Bc8C5Xys8QAK+Zf+5dONEeAjc5+qT4BxKrB9H1czGd X2IgjsBrMDCHZCFRfvUhMG905b1ibO16xwgEw/VrXwJQujWSKc4uLHuKF9SNdwp/ oBUpyltHm0ZUVJzpnQpLuNowYc2hKULRSRKjURkBw5n0gK+OmvCA2uRHERBdUokJ mVrajTIVtYOuqzMvRUYgObo1EAi0LLwloCIDZl7y/WeAJM/4scvfRRPvZ2zCmn7Q kKZxae1nWKRCiSuZdVV9E05sOC0c2P4EA+pqaXyHLkZ2/XOHL0eVeks7zl7qp4yB WaZfmU/h3SBPvxhk0IQr+PZ0Nkzq+gqV0azh6Oy6a5Sz1OhJtxdax9h9WwO1KcBR a6IC8UOtbGHuHa0TGKt7/3Y+W53zZgyJ1rK9QbJMVSnNzFcaWpsKewJk0iNSMLSp lrEUPaNdGULGfxG1Hx1jRCadTWsbqHRtpkM+fSaAfdMKlPEIlzegD+mXKWLoZ+D5 WyFgoNxnUV1SWYh/H+1nKAIAE9FsqIHYM2vlLhV3lVhkUZRH+EC+tWgTR/9fltxF q+51plQhtod1i5Zk/WAioWnma45K4Onvd12sh0RsQh5LUl8NzL6M8IV+XbDpO45F JqFQTqHt7B/HcxEICgdWTISBQDVdH7shzk3iZ2ir/2UFWTmgmbny4m0BZuASzmnm TLm3Bfj8ss6q =3BFg -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/201718pgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":null,"contents":"-----BEGIN PGP PUBLIC KEY BLOCK----- Comment: GPGTools - http://gpgtools.org mQENBEsnyOQBCAC+WKYHcaDGD9a2ztwdccmhnzo0YJ9SOPVIonqGxCa6o8Twfipy H9Wh5uFqc6Ne+I0+vWohQtBIeodQHgGGlbjdxMp3QxohvqCmZcoObH7kjI1lpKQ6 TMvRvo79pq5WRIuE+Jbm6Rt9Hz3w9APrYzISzeCuHFK7vlZfaLdoI0SxzEuzMmyO vQpa6dgSKQF6G2Nsbz1RYujZ6fE6GXtc+I9q44yJS4llYpUqYpPh5CsfbDx8HwMW Pj9eYOwjf3/vniFeG9tHIB8E+p7UBTG7bTVRlEgmyC1Ry0OPO3UPKjL/Dja8IeeJ 
amT0dlsRCD/O/WTyxQg9EceCgUvOz65puKkrABEBAAG0L0lTQyBTZWN1cml0eSBP ZmZpY2VyIDxzZWN1cml0eS1vZmZpY2VyQGlzYy5vcmc+iQExBBMBAgAbBQJLJ8jk AhsDBgsJCAcKAgQVCggCAh4BAheAAAoJEL2X3GOe6MR7gZYIAI4uSaX7HwU2ywaC O77q5O/Zzzp6M+Oi+z7G00GFpACMSDSjX8kbZoIIW7FXb9US/9XLwqDtjlrGOLbd zcZ/+EMsJM/HpT796YH8Y9gMpsH5SZq5gByeguT6KgjqF2w8yNX7/r0AM3psfVSv 9l595CoJWNbDBHqGpqMi55SqKLCQOJ8noDp4x+JVNcJ1AdK/QVSk+gEhLqrvoYNy G8B/C2LL1fBQC8sFkX/M1mKlkM83GnJlPWCrig69VlR+relppEuZ5G34z93c9NtC qdH6zLzzej2qmwLb/dYJYvhEEMDlgvlSWw6yaJTuuEfuqHonrgOTXgb3Ln8A7XJE CuSgWrKIRgQQEQIABgUCSzvYZgAKCRD0lNm/z5iQ+NGrAJ4oJF9sSlLPoJxOgv/0 8pIzsdZ/pACcCKMSIOyGV7hbe09ltOcLkLkDV5iJARUDBRBN/+m6VW6TcBa/RfMB AoG6CACWM9dBowYqeneVa5CLWkgpSNvoNzq4ldiH6l5NnCkgFYnVAC1Ld7iRj8FM Kw/O6nCwDtR1FNsjM2dPi0tWvxs6cCm3KtjePvAxGCtW9+cDUKic3kVq3eMm5sGc C16tTMg1Dnvd5SHM0nY4iUQ7yRgqph5ZMoGKbYKMPQ6EVbTJX19jLKmD5uOlGr9P jJ6blxqVJGXajwyRnyMCxwsF2fKkDkv8fJbWJzKaial5AIkQ7m4fFMDXH0tjSBN5 ljRzKX+hIt+ZUOrIP7b7CdaqdGjhsjarRAk5AgIdKNeod1AXclZI7s6vlvXtp9TM Ai6zl+e315MbRiHJyChItR4cnFG5iQEcBBABAgAGBQJLO+x8AAoJELJw54Etxv+C mJUH/2MeZycBPefqkeQXzK9HDuWRC3Pia0o6LGjnutLPSUV4ijLsilz+nmY+529z 5Of7zqJXL4uAkLT1CkqtQ1kI0IzQZV5fH8+Z7hz6CAgVRN2z4uNF9TFMWk/ltLsc QO1ejlPj2hfRkCudJylc3QLl/FUL7UDtDbOaPXPGPjcbt5Q3IInWEhL4Chnpy23y Slwcb1bR8XZDHt7wS9vxVPXnDvK+Y4CdMXehlN94zvNN1RgoY+VDhK/V5bIACOBj ZqJcBB0BcKJoqu/QbdTxWh/XjkdkRrXt8HmfcsdeunyoqOndEAJQzwfgsD1b9Ssx 10K0B04zjtkurZW0CZ2Wx5WKSdqJARwEEAECAAYFAks77cQACgkQ2BG1Pwt7rgBT 4wgAkueYK0cBbhTiZOLXy+ilu1Dzu84Giy7BWxXelvWgfjTkyhWic7oR8SeymBgU q+uh9gRiikPGHeUxJ5Ji1NLAkQngekEpsOZVXSJnEDTdY8DZ5w5x3/jN2Z676KUq aP5KlL0EEfHE8/KFpduIzXvfE/nYTVXD5w2wkn3aFq0n/GNiWIKClAhJrhr/wh/U mB5CEPzfELJ2vffsb680LIYorcO9gywM3oh9/9veUd9VvvBW8ErWhGP93xX1657i q36aEXRjYi+DIBNN2t6DFJcZBWOo0tNVG+LX2wngVNDNKnfqubF9jzlIpVBf/cgQ Qi088NJuWHMWZWqx5w00uTYTUokBIgQTAQgADAUCWKKzQQWDBWrqAAAKCRCW6ZZQ OUyZrKxMB/49sN/okF1CwesybWv+hdpHDmSTgI0Et5y1ZoD5sNF731pxlsAOu0DQ 8BMkDXgTramPBxHvoXfs8672chHk2bKcxPILwOmcR+Ikb1+qUY2dt2YZfp5DtTZV 
W3D1T+1VsWCFz2phXGTKb9NKj0Aov3oyS4aOkVMLP4jDzwDQnjm6QSMCQKaoysQU LUljK0IWn0xEPwqKkcHJKaEocEhKiKlKJztJs7iIiUB6D/73SsgOzPEda6hQTeGQ gGtF6NHbjJedHfyCtrAIHnpedMA0zaSKU3b159EHw34FJUp9znUScRGzbDB+sG0i 4FWav4D09v3sJNHXnr2fkTzexSw2LZq4iEYEEBECAAYFAlJURV0ACgkQs9Bv5D4Y wC01pQCgkx+weTioMN+ouCarKG9HnbqPyL8AnjhBhYoH2AaqgkLEBGEHIeMKZiBh iEYEEBECAAYFAlVAZbEACgkQEKO5buIYjAYPaQCcCbVXVVAr6Aw13vey4zBN6RHC thMAn1+mFjCAN/TXOye1s3cm3c50UghciEYEEBEIAAYFAk36rLIACgkQ00k+8NKX q44lPQCfZTnzpAn9U03nWawMnYD6r2+1r5sAoL6WBUr3rfgLDQEnGENwjn+rELqH iQEbBBMBCAAGBQJVP/FeAAoJEJ8CrGJYCl/P86oH9iF0VZc9yKuC2TYoAut3IJkV A5lww2alhxzVO8ZEMY35lMQCoDxXM0j11K0kTIRmVhuep72qQuW7oKnLSoFClPvw cBOnEcgPoBv5w/zwd/i2bcO55dPsDmelwguATkm4ENxziTlNg5+xSeyQEAy1cqIy O4j3gZzp/R0eVNVkSIntK9Ifg5oxESzWac5EGwAzl2J+rxpdfweC9NN+Z5xxL4f+ c+tNCJGOmw9mQqJ+arO00VB0BrBfFND4S25xVBzdHIKpA94DD/zeS3hY9EXX6Hxe k55fumA+AsXcNekt704stP0+hd1L1vQfgkAPCR26dW7Kt2xA+vm6Oe3OBrjk7YkB HAQQAQIABgUCVUPsYQAKCRA6e1OMyRuCvycCB/4n8EcWnulQwCIXZOpSvbVrMJyC /fsitpN8zqhUw1n/X+DY/tAAaQMcEUwhJ+qRV5TEOSRZokUwqk/KkJ4bk//MFlTS CZxXLfBsaF/qa1pUCvpKTbZPmvOn6sHj214RoVT1L3S7/k90R9PVjHGQEZlZBBcv 38ld1vZfgcHLaHEYJN7fHT4WZnRWoWmYxKgj7kd0OSfCqST9uoxJDTt80TivHk6M haBP+IfWPJsvIwrrfyoerow9c76s35jj/9u/44860LjaekYbJvFSRYLfYPhN1H0P YJo2DAjifZn6h9r9VzGvNvxW7hMiKKhL8CpSF1raBJi5Rm93UbOh4LYu8y85iQIc BBABCgAGBQJVSDhUAAoJEB9EXrAiC5LsG1IQAIp1nepAgxqzG+Yd6xW4DqIcE7yf /0OzGdxkcCUCUUHumoVddpXHh/Ny1px6XlR037riawYsiCD7gwIh5kPkSiT3a6x1 rteWmeXDAMDW2w0RiBKPy9LsrvyGdUuxQ/eY/S18/L4Xuox93+Yk27fKy7NYmTlU nWPFBWzG46MbyvoPB5KXg/dEKAQrpf2RY9SKOVGaaKH7NzHx3cdb01wMVdtYR2/7 VQTv6QAuYGeccPxW0DLiwWZLqimPHIH3TfzsQpOrv8cj0F9320dLy+wppSZuLQJB XybNiNEvOzYioksH9qC+SN3WzZoXw7nGYBn2D737f7Q+r2+y2PUgDyvc+CoU0ew1 UCgS5cnnKfLlv/jn8lpnE7AYxDczZ1mPYyn3DRuoGOy0JKxhLcY2NpT9AFI1yx+R dWR/xtrkdJZEmdVQL9Gl7nLl7fxbMWq8JgwtZGtkd/5zyo/flAeV27q/pRR3M2uu 59Ufe6OIS5lEllptlSF8A+Gsz7qAqTm/oDRbpPjlhoxgzBXSR43NWdLdMkaJOAyN s/QPiL0ibZ+uvT7nMEy65IgJRXVMikn7iopYw9a22kSt8yuYGWRYmXLpXxldega+ 
AB1c+t3sxRRA3d257pWhr7hkMPudknxuI62wbxirrp4FzKLbygEDVWqrXITlo5eO EhXD8eas6byyGIA2uQENBEsnyOQBCAClLK2OAMRfOx7wRKoQlyLdaPKZc1aCE+ZG P6dp+PFPqDRpeb+KjYguEkGSL7APhXHsPAYQGGzRUzrCvzSlTsoi5CzqkJ5LgZRE s87zHJdIErl9bffVdkiaQirpAmvo5E3IyZ1ETgyZ+VqfRiwFl7rcSbyZ07eKMXJw Crp+mbdb8jkLNpSPc1wk4vAHQyk4gPxLOugj1QkXDBVEupfQKzee4FD9CkY/DdDn zkD3c0oZxq0nbTNA9hItqZlHIJL0iHjVOR0XdojKVhjtS/MnxkL6e3c41mxu4v9k U52fikd2tjjJrV1DblkBdwH6IUtE0UwHAEeyIDi88stqa0/VyrQnABEBAAGJAR8E GAECAAkFAksnyOQCGwwACgkQvZfcY57oxHs3tQgAuAPimXUYtPIAcRuISrr6S9Hx 38J1Zqy0d/XNJkEfbRPolHDns6hEd89n5/2ZuZzE2XGrswlzJhV8ClpRrn4g611a igshB5GMgZ591PZLEYoKPHhv19/ht1EvZNHOxX1miKzmNiHRllEInNBDTrv/eFQB WpXiOAQUL5wyGOoa+BZteKl4LLBe2MHBK3qFFPgMoWHOdrZQ7LFPQ1Gl/75+zg+N hSM6VM4FMqGfe1LT1/pvT+WDdk3nqtIlZHMcZk/cLjWAgmW9VtN4b3APLsL0ZDQ2 3UP/3ZmtoF3VD6F3wSU+JC6NfyleRopjx3Z9Ide9hNUSi4CrayefE1UyelFdaw== =+oEO -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/sopgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":null,"contents":"-----BEGIN PGP PUBLIC KEY BLOCK----- Comment: GPGTools - http://gpgtools.org mQGiBEg9iRQRBACSUmbNjuZC0NZ9mAwjAqd09UeksyX9MYjFaQAXZ1AzQVQ++Kvz Qe55+52XgdBU8E8331IxZMgx86dQjz4z5/Y5qIE5M0PUk9/Kgn2ff/43mby1AGqx je1Z2zCqXaGAgz52CASXC4lPPyldkGwYk+eWsdKV6eoNiLRHG936LQfzZwCgriro S259Zk0lIcl+o7ylg9Qu+mkD/3zJVlJlHFTq5tFdgkbe/Q8wqIGZhZ/d46LGskHq e0gwX6xqkK0rIFOytjBMiELJpT1aZxQingK7IucW8Jr/SNHKL/LwqeqQs9IzXTHS HPoCdeB/z3uBdEn1q9qqad/LA9K0kcfM7x8fO1M5LTyPYA1h4kOYgNRox8iQKvST eqOJA/0V/3yfnXop6gK5U54i0Q07q9lve8Ge+p4PHwV3UWlIfUSY7KER/ybcTdMo 8ShNQXa6qelYrMp3lq4PFu7IQibNTe2SWq5c9ekKgpjeP2ZLSPlGfjF7C04WUKP1 ww/vTOSkslwByW1aDPZ1nVJlQOcKbdok0uCoeFFkna6Y6hA2iLQ1SVNDIFN1cHBv cnQgRW5naW5lZXJpbmcgU3RhZmYgPHN1cHBvcnQtc3RhZmZAaXNjLm9yZz6IYAQT EQIAIAUCSD2JFAIbAwYLCQgHAwIEFQIIAwQWAgMBAh4BAheAAAoJEA2gUt/owZXL Yw4An3HratPO/UynNiauDjD8ANAK/8XNAJ0Vd3Qsxq767Qd7DAtJlgAERJPL84hG BBMRAgAGBQJIPYqRAAoJEHCqWFK1AwmH2x4AoIQKEDuNdMIc37Eb1uAtY6VW4BPG AJ9UcBntSNhlHV1SQ/Kn2mrvmwBvpokBHAQQAQIABgUCSD4g0AAKCRAjNBJOG8ke 
bAIuB/98XDfx8/GvgyOQAv+nxriV/y+b77sAjHdnTtjIQFio8rgItmtNxrB115dR rxfBJK+1JGWgVePz3584RL6s+A1jFvjn89oQLFm7OF3Y89lMNkypqpluUw65kCPV jSv2fJd4hYCxE7Vo81AEHNrrcHHj4S5MUppg+BMr52CsvIE6SZgyEa6fQktu4J60 M08tOvhcKNPOT7tXRCCmhj9aACxGQosbKvo6VuXWXIXt1nQEabOz0/cIaAIj71nd ifPkQnAKGmKFTyV/W19XnA396rMygkVGtpeFVydlY/no1/lUAO2UerhpNJp1BY1i TMvrpOoH59tWST/ieyzkBYiIrbXsiEYEEBECAAYFAlVAZrUACgkQEKO5buIYjAYd DQCfYeTH1SAiMnbnsI5ONc9kYfsUhh4AnAigQO+8MJzMwkIga0wfVS29+TxhiQEc BBMBAgAGBQJVQ+xuAAoJEDp7U4zJG4K/0EIH/3b+xr3iLvIolNDwFvnHimdz16jX f3EzlgNQ+750nHV20VXq3XmDvpaU1avmt5W0FQfqtTqRgcATGN08/s5Owq9y8xaB BqovcjBXsy0u6DNnqqs+3OLl8wIqjmfbR/l0bRnkYpCBlgaTwm2xjvgP4saywchw dVhpwCfMFhRQYQgBEOiQUfis34OSCwghnqb8qzL1vObqJi2X/rdjzRLA20JXCGb5 5mN67f2s0wXT4Ic1bJB4D960IWkshhFJ2QlG0Wi57k8aUuvhtxbTJNeijkos1Ik+ B+q4BflBJLhHJghDp4lDlbwbrLKp595WrzeBl3hztWjzEsUcpA0bkXyRPeaJARwE EwEIAAYFAlU/8hoACgkQnwKsYlgKX8+OTgf/TM9NOuYuJSBls2+W3IbqBq8Rj/4s mJf2Hs90kWGhnVUNVBZrCCnRDhzkhsaETggXtmXVc96gssD7eFqXoe+GWDx4USo2 eqPmKouN34hDxuks4c2KtFIaarL01BuKHvafRrUQ+A4wawsdDkWz198+4YPbuLu6 XqFOqvHjKMeimfMUMeN0mOMeM0C6ybt8wmWRwRGEcpZ9bBz7FcH57iFsjU5q9kiX 1Qs9s1m2HmjaJiAqe6iHlDmX6J7dOI+FbV5JC0yH0zRAblIDWMrGk3iJ+YIfXFFX WMyGaseJ95kD4X9f9YIxPF3b3p1b18bNW03kUwY4Rxj5KGRbCVwfaQ2+HYkCHAQQ AQoABgUCVUg4rwAKCRAfRF6wIguS7HXhEACjUT5hQm54c4FQAsRjMHSOXe7Ne3lP 3UvVrGAk1NthWZaNbJfAJ2UzhRM9JeVp6aEZUHXVhodBtAB/v7dvWY7DC+nCfJdq eMUPME/mAx/MbaRzxW/sc5pZbiilSkR0L911uVDjgIdrpz2MnzydJX/NdHUndYGH eRsU/OqPv7mW0uDsz1mNzVSeaTeLd8APxLKKiYEdS0SFEVoiW5YBZzkBMcK9KpqG 5u/AmHY3y8UBA4NJT83LQE5dQLVWbxYR7qST/T9tDPr/OBbKEy0QDSm+lvorjozd 0UL8nAc498avCq9TaRX2VbJVTZZYx/8wpYWeNWTpDsHGjWlfje1i69OUDUUIbH4y Ez/1AunXCAlHQlqrVlmsNDK1qxvxQmzF1Sihsb7fZR0W4nUoWHvM8bdbb2qTXU5l iOSC8zccTiEiTzFcGTJNJF0rXPQknQyHk08zzStnrOex6m0dKcfx31MFzmnHCl54 iALl7OS63NApS5VepYQ0CY13C5E2MQuaZxdcycgvn48UMuFifcCQjGQLguVSeeM4 yRK2RRBx5L+Q7qK37Z2iO7bwMSCacfor1hk6SNPqhqu9zFXGjZllGcyjANk1Rump LuXltumqHOKKCiKGWw86ecwyC25F12Keqbzcah4Z3GHwL/w6uf1lyrLjrcgulWEl 
4a+giZEnPVjLQ7kCDQRIPYkZEAgAq6l4yJwnh5qjV/INceO3tSxbm3UP7HfF4Vwj BvHPXcSXtd6ZXFVssiZLqVZQirOMtr2pR5KSqJHfw3yW3XwiFP/h4PkHgmpBVG2I HQWOqvywzhgIvzqTpPxWE0jhHTEDD8rZSoDmLyPyUK7VReP/V2aqb7SD3NIOmue0 l7Q8BqtLxGemB4lsZ54WzTol772GQytpPkQIL8I4KgF2SNJ8+kShnSkOpeij6qGd 8X7dlswFqY817Bw67vcSp9fz+LaMwIoVJGUhr4xBSwdGq2Hbk/fa0Lieb2SbPFvC UrKC1lF0mbK67Mw4Pf4zBr4uZI++/PDZZFyX3DUqp/WMswnzxwADBQf/WcCF1dtT +MZ1NHWvpg82Ny+a9snypokw/gj2Ca4w/P4+U3HdexxzOsL7wlxJK4HWaGQsPBO2 k0rG+/gVr/22xIRgAQU6DPCmfRkZ/ixmGWefgWuD4HcPsKbBJg0goXEGwr2vVCah DcXxRfQU30OQfg40PuriDG0xwTLNUvTRy0k1eSq0/p8ctcDa49zLdM/00rU22oSK BgAsya+XdQRSrmJsbPISA/bkES59vCAmKUEma4Svhp3OWNMBpbE8QU0PdY6YFZm3 ugh1Q2hqBlougEzl/7S0n6/VfkBbzP/5GIAf1s8tCpV/yMav7Y4HTwlbvmma7Qqy bn6rGDigzYVjX4hJBBgRAgAJBQJIPYkZAhsMAAoJEA2gUt/owZXLKJQAn3D09C5c ydUn/PEQCEGj4zCRu26bAKCacCq0PydqjxuMXxuc43m4Mchlbg== =Orln -----END PGP PUBLIC KEY BLOCK----- ","permalink":"/supportpgpkey/","tags":null,"title":"OpenPGP Key"},{"categories":null,"contents":"How Did the Internet Domain Survey Work? The Original Survey The Internet Domain Survey was taken twice a year since 1981. The original survey methodology counted hosts by \u0026ldquo;walking\u0026rdquo; the domain name tree and doing zone transfers of domain data in order to discover hosts and further subdomains. It is described more completely in RFC1296. It looked at domains, looked at what addresses they were using, and counted them. It was called \u0026ldquo;domain survey\u0026rdquo; because it used domain data to find addresses that were in use: it surveyed domains to find addresses.\nBy July 1997 the Domain Survey was unable to count a significant portion of the hosts in the domain system, because many organizations began restricting download access to their domain data. The blocking of these downloads (or \u0026ldquo;zone transfers\u0026rdquo;, as they are called) had increased to the point that for the July 1997 survey we could only download address listings for 75% of the domains we discovered. 
We decided to try a new survey technique before the old one became useless.\nThe Next-generation Survey In January 1998, we ran the first \u0026ldquo;new\u0026rdquo; Internet Domain Survey. Its methodology was the reverse of the original. It counted the number of IP addresses that have been assigned a name. This distinction is subtle but it does mean the new survey is counting something different than the old survey. This difference made it problematic to compare numbers from before and after the change, so we ran the two in parallel for a while.\nHow It Works The new survey algorithm works by querying DNS for the name assigned to every possible IP address. However, if we had to send a query for each of the potential 4.3 billion (2^32) IP addresses that can exist, it would take much too long. Instead, we start with a list of all network numbers that have been delegated within the IN-ADDR.ARPA domain. The IN-ADDR.ARPA domain is a special part of the domain name space used to convert IP addresses into names.\nFor each IN-ADDR.ARPA network number delegation, we query for further sub-delegations at each network octet boundary below that point. This process takes a few days; when it ends we have a list of all 3-octet network number delegations that exist and the names of the authoritative domain servers that handle those queries. This process reduces the number of queries we need to do from 4.3 billion to the number of possible hosts per delegation (254) times the number of delegations found. In the January 1998 survey, there were 879,212 delegations, or just 223,319,848 possible hosts.\nWith the list of 3-octet delegations in hand, the next phase of the survey sent out an ordinary UDP-based PTR query for each possible host address between 1 and 254 for each delegation. In order to prevent flooding any particular server, network, or router with packets, the query order is pseudo-randomized to spread the queries evenly across the Internet. 
For example, a domain server that handles a single 3-octet IN-ADDR.ARPA delegation would only see one or two queries per hour. Depending on the time of day, we transmit between 600 and 1200 queries per second. The queries are streamed out asynchronously and we handle replies as they return. This phase takes about one day for each 25 million probes to run. The January 1998 probes took 8 days.\nThe Results Due to the differences in the old and current surveys, it is not possible to directly compare the host counts produced by each. However, we have tried to adjust the old domain survey host counts in order to make some comparisons. We did this by assuming that if we missed a certain percentage of domains in the old survey, the final host count would be approximately that same percentage lower than the actual value. So we took the old host counts and raised them by the proper percentage of domains we couldn\u0026rsquo;t survey to arrive at an \u0026ldquo;adjusted host count.\u0026rdquo; This allows us to have something to compare with the new survey.\nWith the new survey we are now publishing five figures per top-level domain (on our distribution by TLD charts). For each TLD, we show the total number of hosts found (which equals the number of PTR records found) and the number of duplicate host names found (which usually indicates a host with many addresses); then we subtract the duplicate count to arrive at the final host count.\nWe also publish two new numbers: a count of hosts under the 2nd- and 3rd-level domain names for each TLD. These counts have different meanings depending on how the particular TLD is organized. For example, for the .COM domain, the number of 2nd-level names equals the number of hosts in organizations using names registered under .COM, and the number of 3rd-level names is potentially meaningless. 
However, some TLDs, like .UK and .AU, have a few fixed subdomains at the 2nd-level (like .CO.UK), so the 3rd-level count shows the number of hosts within the organizations at the 3rd level.\nObservations Because the current survey technique uses ordinary DNS queries, and because these types of queries are used by many standard Internet applications, it is rare to find them blocked. This allows us to gather all the data we need without the blocking problems the old survey had. It also demonstrates that organizations blocking zone transfers in order to hide their host data have a false sense of security.\nWe decided not to try to verify the PTR entries we collected by trying to look up the name returned and verify that its address matched the PTR record. One reason is that this process would take far longer than the PTR lookup process. However, another reason is that there are a lot of PTR entries that are wrong, even though the host actually does exist. Cases were found where an IP address was pingable and had a PTR entry, but a lookup on the hostname did not return an address.\nIn our distribution by TLD charts, we show an entry called \u0026ldquo;ARPA\u0026rdquo; and one called \u0026ldquo;UNKNOWN\u0026rdquo;. The count for ARPA shows you the number of administrators that tried to set up a PTR entry for a host but left off the trailing dot in their zone files. These are hosts that probably exist, but have an invalid host name. The UNKNOWN count shows you the number of PTR entries that did not have any valid TLD name. These are sometimes typos, and other times entries for unused addresses (for example, a domain administrator might put in the hostname \u0026ldquo;unassigned\u0026rdquo; for any unused address).\nNote that the new survey algorithm had the same potential problems as the old: just because a hostname is assigned an IP address, or an IP address is assigned a hostname, that does not mean the host actually exists. 
To find out how many hosts actually exist at a given time, we ping a 1% sample of all the hosts found and apply the result to the total hostcount to obtain an estimate of the total number of pingable hosts. There are other potential survey problems, many of which are discussed in RFC1296.\nWhile comparing host counts per country code between the current survey and the final \u0026ldquo;old\u0026rdquo; survey, we found that a very small number of countries lost a significant number of hosts. We have not analyzed the data to find out exactly why this occurred, but it may be due to a number of reasons. We may just have had very bad network connectivity or packet loss to certain foreign countries that interfered with the data collection process. Another possibility is that in certain places it is not common for providers to place entries in the IN-ADDR.ARPA tables.\nAnother item some may notice is that our count of hostnames (or firstnames as we call them) has interesting changes. For example, the number of hosts named \u0026ldquo;www\u0026rdquo; dropped between the old survey and the new survey. The reason for this is that in the old firstname count, if a host had two names, for example \u0026ldquo;example.com\u0026rdquo; and \u0026ldquo;www.example.com\u0026rdquo;, that were both assigned the same IP address, the name \u0026ldquo;example\u0026rdquo; and \u0026ldquo;www\u0026rdquo; would each be counted as a firstname for the same host. In the current survey, a PTR record can only return a single official hostname for a particular IP address. In the example above, the new survey would count either \u0026ldquo;example\u0026rdquo; or \u0026ldquo;www\u0026rdquo;, depending on which name the administrator set up to be the official name. 
Since the \u0026ldquo;www\u0026rdquo; count dropped between surveys, it appears that the \u0026ldquo;www\u0026rdquo; prefix is used heavily as an \u0026ldquo;alias\u0026rdquo; for official host names.\n","permalink":"/survey-about/","tags":null,"title":"About the Internet Domain Survey"},{"categories":null,"contents":"What was the ISC Domain Survey? Internet Systems Consortium\u0026rsquo;s Internet Domain Survey, originally started by SRI International, is the longest-running survey of the number of computers connected to the Internet. This data provides an in-depth look at the active domains on the Internet. Despite its name, the ISC Domain Survey is not a survey of domains; it is a survey of IPv4 addresses that have a domain name attached to them. The name \u0026ldquo;Domain Survey\u0026rdquo; derives from an early algorithm used to locate addresses in use.\nHow many people use the Internet? There is no way to determine how many users are on the net, besides making guesses and estimates. Other organizations and companies may do surveys to estimate the number of users on the Internet. You might find some useful information at The Netcraft Web Server Survey, The Center for Applied Internet Data Analysis, or The Internet Society.\nHow big is the Internet? We consider the numbers presented in the Domain Survey to be fairly good estimates of the minimum size of the Internet. Obviously, we cannot tell if there are hosts or domains we could not locate. Whatever those numbers measure, they measure the same thing every time, so it is meaningful to make comparisons to determine growth.\nIt is not possible to determine the exact size of the Internet, where hosts are located, or how many users there are.\nHow do I figure out the breakdown of hosts per country from your data? You can\u0026rsquo;t. There is not necessarily any correlation between a host\u0026rsquo;s domain name and where it is actually located. 
A host with a .NL domain name could easily be located in the US or any other country. In addition, hosts under domains .edu/.org/.net/.com/.int could be located anywhere. There is no way to determine where a host is without asking its administrator. Many countries sell names in their national domain as a source of revenue; for example, the tiny island nation of Tuvalu (population 11,000) has some 80,000 names defined, because companies are willing to pay good money for a \u0026ldquo;.TV\u0026rdquo; domain name. None of them are located in Tuvalu.\nHow did you compute the adjusted host count figures? We start with the figure we published for each old domain survey (on the distribution by domain name charts), in the column named \u0026ldquo;Percent Domains Missed.\u0026rdquo; If for example it says \u0026ldquo;20%\u0026rdquo; domains missed, this means we got 80% of the domains, or 0.80. We then take the total host count and divide it by this number. For example if the hostcount was 800 and we missed 20% of the domains, we would divide 800 by 0.80 to arrive at an adjusted host count of 1000.\nWhat is a host? A host used to be a single machine on the net. However, the definition of a host has changed in recent years due to virtual hosting, where a single machine acts like multiple systems (and has multiple domain names and IP addresses). Ideally, a virtual host will act and look exactly like a regular host, so we count them equally.\nWhat is the relation between a host and a network number? There is not necessarily any correlation between a network number and a domain name. A single network number could span many countries, and a single domain may have hosts on multiple network numbers.\nCan I have permission to reproduce your data or charts? You have permission to reproduce our data provided that you mention the source as \u0026ldquo;Source: Internet Systems Consortium, Inc. 
(https://www.isc.org/ ).\u0026rdquo; For derivative works based on our data, you must say your data or charts are \u0026ldquo;Based on data from Internet Systems Consortium, Inc. (https://www.isc.org/ ).\u0026rdquo;\nAre the raw data available? We have some of the raw data available. The data for each survey occupies several gigabytes. In the early days of the Domain Survey it was very expensive to store that much data. Contact info@isc.org to see if we might have what you want. We make no promises.\nWhat terms are used in the Domain Survey? These are definitions of terms used by the Internet Domain Survey. Not everyone uses these terms with the same meanings.\ndomain name: A domain name is any name representing any record that exists within the Domain Name System, i.e. com, example.com, www.example.com.\ndomain: A domain is a domain name that has name server (NS) records associated with it. In other words, there may be subdomains or hosts under it, i.e. com, example.com.\ndomain server: A domain server is a computer system that holds all the records associated with a particular domain, and answers queries about those names. Often called a domain name server or just a name server.\ndomain (top-level): A top-level domain name, or TLD, is either an ISO country code or one of the generic domains (com/org/net, etc.). It should be noted that there is not necessarily any correlation between a country code and where a host is actually located. Domains named after ISO country codes are often referred to as \u0026ldquo;ccTLDs\u0026rdquo;, and generic domains are often referred to as \u0026ldquo;gTLDs\u0026rdquo;.\nduplicate host: A duplicate host is a single host name that maps to multiple IP addresses. When a host like this is found in the survey, it is only counted once.\nhost: A host is a domain name that has an IP address (A) record associated with it. This would be any computer system connected to the Internet (via full or part-time, direct or dialup connections), i.e. 
example.com, www.example.com.\nhost name (or firstname): A host name is the first part (before the first dot) of a host\u0026rsquo;s domain name, i.e. www.\nnetwork numbers (class a/b/c): A network number is the first part of an IP address, which identifies what network the hosts numbered in the rest of the address are connected to. The class a/b/c system is no longer used and the data are presented that way only for historical reasons. It is no longer possible to tell how many network numbers are in use by looking at IP addresses, since the netmask cannot be determined from the number. Also, note that domains do not map directly to particular network numbers and no correlation between them can be inferred.\nzone transfer: A zone transfer is the process of downloading all the records associated with part of a domain from a domain server.\n","permalink":"/survey-faq/","tags":null,"title":"About the Internet Domain Survey"},{"categories":null,"contents":"Power Requirements Our system, as requested, comes with dual power supplies, to both protect against the failure of a power supply and protect the system as a whole. ISC requires that each PSU be fed from a different power source that does not share a common breaker — and that each breaker have enough overhead available that it can handle the entire load of the system at full power. ISC will set CPU throttling options on the node if at all possible.\nPhysical Security Requirements Because F-Root is a \u0026ldquo;small\u0026rdquo; service, it is not uncommon for the system to live with other \u0026ldquo;core\u0026rdquo; equipment, often close to an Exchange switch, rather than in customer colocation space. 
While ISC attempts to tamper-proof our systems as much as possible, we strongly recommend that F-Root nodes not be placed in shared colocation facilities where the general customer pool has access to the machine.\nNetwork Connections (Physical and IP Requirements) All F-Root servers offer IPv4 and IPv6 service, and we require that the management and exchange connections be dual-stack. (The connection for the remote management card can be v4-only, if need be.) ISC requires several subnets/connections of provided IP address space for each F-Root node:\nA single IPv4 address, with a static default route, to be used for the Drac (this is a copper-only 10/100 connection). If you have a router that is capable of placing a simple ACL, we can provide the IP block from which our management access will originate, so access can be restricted during initial setup, but this is not strictly necessary.\nA transit link with a /29 of IPv4 address space for management of the various systems functions, and a /64 of IPv6, with a BGP session originating a default route. By default, we expect this to be a gigabit copper connection.\nA connection with a single IPv4 and IPv6 peering IP address for your Internet exchange. This connection can be fiber or copper. Dual-Stack IPv6 (as well as IPv4) is a hard requirement.\nThe standard F-Root server has SFP+ optics, and thus is configured for connection at 10G, or at 1G if compatible down-clocking SFP+ optics are used (regular 1GE SFP optics will not work). If your exchange supports copper gigabit ethernet and your planned location for the system is within the length requirements mandated by the ethernet specification, this is also workable without needing a special optic.\nBandwidth Requirements F-Root nodes require reliable upstream bandwidth, for the ability to transfer the root zone from our distribution primaries, as well as monitoring and provisioning. 
Additionally, there are regular (several times annually) coordinated exercises across all F-Root nodes wherein all DNS data is captured and uploaded to a location off-site; reliability is essential for this task. (99.9 percent uptime is required.)\nURPF Requirements Because routing is asymmetric, an F-Root node’s prefix may be advertised to clients which themselves do not advertise a return route via the same protocol; thus, there is the expectation that there will be a fairly regular stream of DNS responses returning over the management transit connection to the Internet at large. These responses must not be limited or restricted, although ISC can provide a list of IP addresses from which these packets will originate.\nNetwork Neutrality and Firewalling ISC requires that the traffic going to/from an F-Root server not be modified in transit or interfered with in any way, including but not limited to: TCP/UDP Port Blocking, Rate Limiting (except as provided by physical interface requirements), modification of DNS queries or responses in-transit, or limitations on which clients may be served by an F-Root node.\nCommunications and Remote Hands ISC requires your organization to provide Administrative, Technical, and Abuse contacts. We require as much advance notice as possible for service-affecting maintenance issues.\nBGP Route-Server Preferred (IX sites) If you are an Internet Exchange, ISC strongly recommends that your organization have a route server available and that our node peer with the route server, so that anyone who wants to gain the benefits of F-Root need only peer with the route server. In non-route server scenarios, ISC advertises the F-Root prefixes with the well-known BGP Community NO_EXPORT (65535:65281). With a route server present, the route must still be advertised to peers with this community set, but route servers present additional challenges. 
Your route server must either be configured specifically to \u0026ldquo;overlook\u0026rdquo; the NO_EXPORT community and re-apply it when advertising it to other peers, or provide a community whereby ISC may signal that this community must be applied to the routes.\nISC will provide instructions for network configuration and for BGP configuration of the name service interface. We will finish the configuration and bring it online remotely. Our servers expect to peer with RS (Route Server) devices, but under special circumstances we can do individual peering with clients that do not use the route server.\n","permalink":"/froot-technical/","tags":null,"title":"Hosting an F-Root Node"},{"categories":["Company News","BIND","Security"],"contents":" Surveillance is the business model of the Internet. Everyone is under constant surveillance by many companies, ranging from social networks like Facebook to cellphone providers. This data is collected, compiled, analyzed, and used to try to sell us stuff.\nSource: https://today.law.harvard.edu/internet-privacy-afraid/\nPersonal data, of the sort that is exposed while using the Internet, is valuable. If you share personal data, you should assume that there may be someone out there who is motivated to collect, analyze, and monetize it. One step in minimizing exposure of your personal data on the Internet is to use a DNS service that implements QNAME minimization. BIND 9.14.0, the new current-stable version of BIND, features QNAME minimization enabled by default. This is a significant change in the way DNS name resolution is done.\nWhat is QNAME minimization? RFC 7816 defines \u0026ldquo;DNS Query Name Minimisation to Improve Privacy.\u0026rdquo;\nQNAME minimization changes the DNS queries from the recursive resolver to include only as much detail in each query as is required for that step in the resolution process. 
The IETF draft describes it as a technique \u0026ldquo;where the DNS resolver no longer sends the full original QNAME to the upstream name server.\u0026rdquo;\nLet\u0026rsquo;s say you want to visit a blog site at https://someblogname.bloghosting.com.pl. To determine which IP address to connect to for that site, your computer sends a request to your ISP\u0026rsquo;s resolver, asking for the full name - someblogname.bloghosting.com.pl, in this case. Your ISP (or whoever is running the network you are using) will ask the DNS root, and then the top-level domain (.pl in this case), and then the secondary domain (.com.pl), for the full domain name. In fact, all you are finding out from the root is \u0026ldquo;where is .pl?\u0026rdquo; and all you are asking .pl is \u0026ldquo;where is .com.pl?\u0026rdquo; Neither of these servers needs the full name of the website you are looking for in order to answer, but both receive it. This is how the DNS has always worked, but there is no practical reason for it today.\nWhy is DNS privacy important? Sharing the full query target with intermediate servers may not seem like a big deal in this case. But what if you were looking up something controversial, or even illegal in your local jurisdiction, such as information on a political movement that is prohibited in your country? Would you want your query to be shared with domains that have no practical need for that information?\nConstant background leakage of this sort of meta-data, combined with improvements in big-data analysis, has made it more feasible to profile and track individual user activity on the Internet.
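The walk-through above can be sketched in a few lines of Python. This is an illustration only (the function name is ours, and a real resolver caches zone cuts and skips steps it already knows the answer to), but it shows exactly which name each server sees with QNAME minimization:

```python
def minimized_steps(qname: str):
    """Return (zone asked, name sent) pairs for each zone cut a
    minimizing resolver consults, per the idea in RFC 7816: each
    query carries only one more label than the zone it targets."""
    labels = qname.rstrip(".").split(".")
    steps = []
    for i in range(1, len(labels) + 1):
        zone = ".".join(labels[len(labels) - i + 1:]) or "."
        asked = ".".join(labels[len(labels) - i:])
        steps.append((zone, asked))
    return steps

# With minimization, each server sees only what it needs;
# without it, every server below would receive the full name.
for zone, asked in minimized_steps("someblogname.bloghosting.com.pl"):
    print(f"ask {zone!r} about {asked!r}")
```

Run against the example name, the root is asked only about "pl", .pl only about "com.pl", and so on; only the final server for bloghosting.com.pl ever sees the full name.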
As RFC 7816 states, \u0026ldquo;QNAME minimization follows the principle \u0026hellip; the less data you send out, the fewer privacy problems you have.\u0026rdquo;\nThe DNS Privacy Project has an excellent summary of the reasons for concern about user privacy:\nAlmost every activity on the Internet starts with a DNS query (and often several). A key function of the DNS is to map human readable names (e.g. example.com) to IP addresses that computers need in order to connect to each other. Those queries can reveal not only what websites an individual visits but also meta data about other services such as the domains of email contacts or chat services. Whilst the data in the DNS is public, individual transactions made by an end user should not be public. However DNS queries are sent in clear text (using UDP or TCP) which means passive eavesdroppers can observe all the DNS lookups performed. The DNS is a globally distributed system that crosses international boundaries and often uses servers in many different countries in order to provide resilience. \u0026hellip;* Note that even when using a VPN some VPNs will still leak your DNS queries by sending them unencrypted to your ISP. Use the nice tool from anonymyster.com to check if this is happening with your VPN! \u0026ndash; Sara Dickinson on DNS Privacy.org\n* Deleted points concern additional risks associated with the exposure of meta-data to the resolver operator. QNAME minimization does nothing to reduce data shared with the resolver operator; QNAME minimization on its own is only a partial mitigation for protecting user privacy.
In that case, BIND\u0026rsquo;s implementation, with the \u0026ldquo;relaxed\u0026rdquo; qname-minimization setting, which is currently the default, will \u0026ldquo;fall back\u0026rdquo; to normal querying and re-send the query with the full QNAME. A 2015 research study on QNAME minimization found that 12% of minimized queries failed, almost all due to a few large CDNs. These problems have largely been addressed, but there are still operators that fail to answer minimized queries properly. The current \u0026ldquo;relaxed\u0026rdquo; default might change in the future to \u0026ldquo;strict\u0026rdquo;, which will fail to resolve a name whose authoritative servers don\u0026rsquo;t answer minimized queries properly.\nQuery loops are more common when using QNAME minimization.\nExample query loop:\nexample1. IN NS ns.example2. example2. IN NS ns.example1. We have added query-loop detection in BIND\u0026rsquo;s QNAME minimization logic so we won\u0026rsquo;t get hung up in this situation.\nA single authoritative server may serve both parent and child for a domain: .org and isc.org, for example. In this case, the .org zone should properly include NS records delegating isc.org, along with a DS record for isc.org. However, because the query traditionally contained the full name, servers that lacked these parent-to-child pointers could still answer correctly. With QNAME minimization, if there is no explicit link between parent and child, resolution will fail.\nIn the case of zones that contain labels that are multiple levels deep (reverse PTR lookups for IPv6 are the classic example), QNAME minimization can require more queries than previously necessary. There are limits on how deep BIND will go with qname-minimization, and BIND will \u0026ldquo;jump over\u0026rdquo; some labels when querying for IPv6 PTR records.
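For reference, this behavior is controlled by the qname-minimization option in named.conf. A minimal sketch, with the value names as documented in the BIND 9.14 ARM (verify against the ARM for your version):

```
options {
    // "relaxed" (the 9.14 default) retries with the full QNAME
    // when a server mishandles minimized queries; "strict" does
    // not fall back; "disabled" or "off" turns the feature off.
    qname-minimization relaxed;
};
```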
These extra queries diminish as the cache is primed with answers.\nWhat are the potential technical side-benefits of QNAME minimization? When a resolver gets a response of NXDOMAIN, meaning the domain does not exist, the resolver will stop querying. BIND 9 caches and reuses negative responses to avoid superfluous queries.\nQNAME minimization makes each negative answer more useful. For example, when querying for \u0026ldquo;department.school.university.edu\u0026rdquo; with QNAME minimization, a negative response from .edu applies to the intermediate question university.edu and is cached as a negative response for that higher-level name, so it also covers anything beneath it.\nThe demand for online privacy is increasing The Pew Research Center has done extensive surveys on end-user attitudes towards their online privacy. Unsurprisingly, they are finding consumer concern has increased substantially over the last few years.\n\u0026ldquo;Pew Research Center studies have shown that people are anxious about all the personal information that is collected and shared and the security of their data. Overall, a 2014 survey found that 91% of Americans \u0026lsquo;agree\u0026rsquo; or \u0026lsquo;strongly agree\u0026rsquo; that people have lost control over how personal information is collected and used by all kinds of entities.\u0026rdquo;\nSource: http://www.pewresearch.org/fact-tank/2018/03/27/americans-complicated-feelings-about-social-media-in-an-era-of-privacy-concerns/\nSome of this concern about social media and data sharing has already carried over into concern about DNS privacy.\n79% of respondents to a 2017 ISC survey on DNS privacy believed that products and services that improve end-user privacy will gain a marketing benefit from that improvement.\nMultiple new DNS services that promise to protect end-user privacy have emerged in the past two years and are seeing tremendous adoption.
Quad9 (which operates a DNS resolver service at 9.9.9.9) reports that they are seeing growth in their user base of 5-10% EACH WEEK. Cloudflare, which operates 1.1.1.1, has also implemented QNAME minimization on their service.\nQNAME minimization is only one element in an overall privacy protection plan. It does not encrypt your communications, nor does it ensure the integrity of data received. However, QNAME minimization is important for minimizing passive data leakage, and it is one end-user privacy step that requires absolutely no effort or retraining of the end user. It is also easy for the service provider to deploy and adds no cost or performance penalty. We hope that in 2019, QNAME minimization will spread across the Internet and become the new default standard for a well-run DNS service.\n","permalink":"/blogs/qname-minimization-and-privacy/","tags":["qname","bind","security","isc"],"title":"QNAME Minimization and Your Privacy"},{"categories":["BIND","DNS"],"contents":"DNS Flag Day is a public-benefit movement, like a community trash pick-up day in a common area everyone uses. Everyone who has participated, or will participate, in supporting this initiative should feel they have accomplished something worthwhile. Some operators thought it was unwise that Flag Day was set for a Friday, and were taken by surprise by the level of user concern. Some of the Twitter posts were kind of silly and needlessly alarmist. Characterizing this as a \u0026ldquo;Flag Day\u0026rdquo; exaggerated how sudden the real change on the Internet would be, but the effect of giving everyone a goal date achieved the objective. The DNS community as a whole did accomplish at least three things over the past year:\n1) Approximately 4.8 million domains were tested in January alone, using the ednscomp.isc.org site. We rate-limited the tester to 10 queries per second, to limit its potential as a DDoS tool. We averaged 2-3 tests per second for most of the month!
About half of these, or 2,312,494 tests, were initiated through the www.dnsflagday.net site. This many people testing and validating their DNS implementations has got to be good for overall DNS hygiene.\n2) Throughout the month of January, we saw significant improvement across the Alexa top 1 million domains. Although we have seen gradual improvement over the past couple of years, that slow rate of improvement accelerated sharply as people saw the Flag Day deadline approaching. The chart below shows the most critical EDNS compatibility issue – servers that would not respond to a query that included an unknown EDNS(0) option. These were the servers that would appear to be unavailable after Flag Day.\nThis problem is almost always caused by firewalls blocking EDNS, and it was dramatically improved by Flag Day.\nWe saw significant improvement in the most important compatibility numbers for the Alexa top 1M sites in January. We were already at \u0026gt;96% success on these basic tests at the beginning of 2017, but fixing the last few percentage points was going very slowly. That accelerated sharply in January.\nCountry-Code Top-Level Domains The most wonderful thing was that a few top-level domains tested all of their child domains and published reports about their compliance. We know that Switzerland\u0026rsquo;s .ch registry downloaded the ednscomp test tool and tested their domains a year or more ago. A few others who have also been working on improving EDNS compliance published compliance reports for Flag Day:\nNZ https://internetnz.nz/news/global-change-set-break-nz-domains CZ https://en.blog.nic.cz/2019/01/24/almost-all-cz-domains-are-ready-for-the-february-dns-maintenance/ 3) Quite a few vendors and operators took notice and made progress.
A number of vendors, including firewall vendors and hosted service providers, tested their own products and services and published their status and plans to remedy remaining EDNS compatibility problems.\nProduct Vendors BlueCat: https://www.bluecatnetworks.com/blog/dns-flag-day-is-coming-and-bluecat-is-ready/\nCitrix: https://support.citrix.com/article/CTX241493\nDNSimple: https://simpledns.com/news/78/simple-dns-plus-v-8-0-build-108-released-dns-flag-day-update\nEfficientIP: http://www.efficientip.com/dns-flag-day-notes/\nF5: https://support.f5.com/csp/article/K07808381?sf206085287=1 and https://worldtechit.com/dns-flag-day-for-f5-dns/\nInfoBlox: https://community.infoblox.com/t5/Community-Blog/DNS-Flag-Day/ba-p/15843?es_p=8449211\nJuniper: https://kb.juniper.net/InfoCenter/index?page=content\u0026amp;id=TSB17513\nMicrosoft Azure: https://azure.microsoft.com/en-us/updates/azure-dns-flag-day/\nMicrosoft Windows: https://support.microsoft.com/en-sg/help/4489468/windows-server-domain-name-system-dns-flag-day-compliance\nPalo Alto Networks firewall: https://live.paloaltonetworks.com/t5/Community-Blog/DNS-Flag-Day-Are-You-Ready/ba-p/248284\nPulse: https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB43996\nSimpleDNS: https://simpledns.com/news/78/simple-dns-plus-v-8-0-build-108-released-dns-flag-day-update\nService Operators Dyn: https://dyn.com/blog/what-you-need-to-know-about-dns-flag-day/\nExabytes: https://support.exabytes.com/en/support/discussions/topics/14000013075\nGoogle: https://groups.google.com/forum/#!msg/public-dns-announce/-qaRKDV9InA/CsX-2fJpBAAJ\nQuad9: https://quad9.net/dns-flag-day-2019/\nValimail: https://www.valimail.com/blog/what-dns-flag-day/\nThere were some balanced and informative news articles as well. 
These are just a few of them:\nCarsten Strotmann on Heise.de: https://www.heise.de/newsticker/meldung/DNS-flag-day-Initiative-gegen-veraltete-DNS-Server-und-Middleboxen-4289779.html\nCricket Liu on Tech Target: https://searchsecurity.techtarget.com/answer/Will-DNS-Flag-Day-affect-you-Infobloxs-Cricket-Liu-explains\nAPNIC: https://blog.apnic.net/2019/01/17/are-you-ready-for-dns-flag-day/\nLACTLD: https://lactld.org/en/novedades/interview-hugo-salgado-cl-dns-flag-day.html\nMalaysian National Cyber Security: https://www.nacsa.gov.my/announce_dns.php\nMost of these links came from the DNSFLAGDAY site! In closing, we would like to recognize and thank our colleagues in the other open source DNS projects – NLnet Labs, CZ.NIC Labs, PowerDNS, and Dnsmasq – and DNS-OARC, for their solidarity and collaboration. We love you guys! ","permalink":"/blogs/dns-flag-day-was-it-a-success/","tags":["DNS Flag Day","BIND","ISC"],"title":"DNS Flag Day, was it a success?"},{"categories":["DNS","BIND","Kea","DHCP","Company News","Annual Reports"],"contents":"Because ISC is a non-profit with few reserves, whether a year was a good one has a lot to do with our financial status. 2018 was a good year on that score. Most of our funding comes through software support contracts, and in 2018, our support business was strong. We gained 22 new support contracts and lost only 10. As a result, we were able to add quite a lot of staff: three new BIND 9 developers, a senior QA person, a systems and support engineer, and a documentation/communications person. That is a 20% increase in staffing, which is huge. One BIND 9 engineer left and went to one of our OEM partners, and one BIND 9 engineer who had previously left for another open source project returned to ISC.\nUpdated Development Environment The biggest event at ISC in 2018 was the migration to GitLab, for both the BIND 9 and Kea DHCP development teams.
The ISC-hosted GitLab replaced the separate issue tracker, wikis, and repos for both BIND and Kea with a single development platform. We also are leveraging the continuous integration capabilities of GitLab. We invested quite a bit of effort into getting our automated BIND 9 tests to run in parallel to speed them up. Now nothing is committed to BIND 9 without first passing our continuous integration tests. The Kea team is also making extensive use of the GitLab wiki pages to post design and requirements documents in the open. The adoption of GitLab was fairly disruptive, but overall it has been a positive change.\nUser Participation Is Up The shift to GitLab as our primary working repository brought us several unexpected benefits. We opened our issue tracker for BIND so new issues are now open by default; only a few sensitive issues are non-public. (The Kea issue tracker was already open.) We are able to accept contributed patches more gracefully, via merge requests (we used to require email attachments). It seems like we are getting better-quality bug reports, which could be because they are public. We have been pleased to see many people create accounts on https://gitlab.isc.org and start actively opening issues, commenting on existing issues and offering to help (although anyone can browse without an account).\nWe Welcomed New Kea Users Our support business is a reasonable barometer of how our open source work is being received by users. Many of our new support customers are adopting Kea DHCP, which is gratifying because this project has been under-supported for the past several years. While the IETF work on DHCP is winding down, and the standards-setters are enthusiastic about router-advertised address assignments, DHCP seems to be a continuing core requirement of most networks. 
We have seen a healthy interest in our “premium” Kea hooks software package, which provides a moderate-cost (US$499) way to support ISC’s open source work.\nBIND Refactoring Is Well Underway We are very satisfied with the BIND 9 refactoring we have completed so far. Our initial goal was to reduce software complexity, but we are also adapting to modern cryptography standards and taking more advantage of well-supported Linux OS features that didn\u0026rsquo;t exist when BIND 9 was first written. In January we released BIND 9.12, with refactored query logic (query_find and resquery_response). These massive, ridiculously complex functions were split into smaller functions and we created an external library, libns. These changes simplified the query processing code and made it much easier to subsequently implement QNAME minimization. We continued with the refactoring, tackling BIND\u0026rsquo;s network interface next. The results of that will be seen in 9.14.0, where we expect to see significant improvements in performance on modern hardware.\nThese are the highlights of 2018 at ISC. We go into more depth, or at least more length, below.\nCustomer Support Our busiest quarter for customer support was Q2 (April-June) – and it wasn\u0026rsquo;t followed by the usual summer holiday slow-down, so our support team has been very busy all year. BIND 9 is a steady workload; ISC DHCP ticket volumes are gradually decreasing, while Kea questions and issues to our support team are steadily increasing. This is to be expected, since we are ending the year with many more Kea support customers than we started it with. We averaged just over six new support issues per week, and the return of Alan Clegg, an experienced former ISC systems and support engineer, helped us to handle the increased workload.\nScreen Shot from kb.isc.org showing the BIND 9 feature matrix Other support-related activities:\nWe migrated our Knowledgebase to a new platform.
The old software was unmaintained and starting to decay, and we wanted to make sure that the KB would continue to be a useful resource for our users. This was a big project for our new Marketing Communications person, Suzanne Goldlust, because the markdown and links all had to be updated and we needed to maintain many of the old URLs, since many of the documents are linked from elsewhere. So far, user response has been positive. We saw increasing demand for binary packages from ISC, for both BIND 9 and Kea. Our users have asked us to incorporate software dependencies (particularly for BIND 9 – DNSTAP support), provide a fresher software version for older OS releases, and offer a \u0026ldquo;supported binary\u0026rdquo; with our support contracts that includes our premium features. We responded by contributing BIND 9 packages for Debian and Ubuntu in their repositories. We have also created our own RHEL/CentOS 6 and 7 binaries, both for open source users and for our BIND 9 Subscription Edition users. We now definitely appreciate the challenge our OS packagers have faced for many years in deciding which of BIND 9\u0026rsquo;s many compile-time options to enable for their packages! We expect to iterate on this more in 2019 in an effort to improve the ease of installation and updating. (https://www.isc.org/blogs/bind-9-packages/) Customers and users asked for resources on performance tuning and information on the impact of the choice of OS and features on BIND performance. We have shared what we have, but we can see a need for more experimentation in this area to meet the requests. Unfortunately, this isn\u0026rsquo;t what our Perflab tool was designed to do and it isn\u0026rsquo;t really suited to this job. The continued inability of the DNS standards community to settle on a “solution” to the CNAME-at-zone-apex problem is frustrating BIND 9 users.
(ISC has participated vigorously in the debate about CNAMEs at the apex of a DNS zone, and has put forth three proposals: https://tools.ietf.org/html/draft-ietf-dnsop-aname-01, https://datatracker.ietf.org/doc/draft-sury-dnsop-cname-plus-dname/, and https://www.isc.org/blogs/dns-and-web-architecture/.) BIND customers are curious about DNS cookies and anxious about whether this change will introduce problems. DNS cookies were first added in BIND 9.10, but now that they have matured they are enabled by default. After much fanfare – and years of preparation and testing – the root KSK rollover in October was wonderfully uneventful. Far more angst was caused by GDPR compliance. Now we are gearing up for a possible spike in questions about identifying and troubleshooting EDNS compliance issues ahead of the upcoming DNS Flag Day. (https://www.isc.org/blogs/end-to-bandaids/) Projects – BIND 9 Ondřej Surý, our BIND 9 Development Director, drove a lot of changes in 2018, including ISC\u0026rsquo;s adoption of GitLab, the addition of three new BIND team members, a new BIND release strategy, and termination of workarounds for legacy systems.\nEarly in 2018 we implemented the BIND 9 release strategy announced in late 2017; you can read about it at https://www.isc.org/blogs/bind-release-strategy-updated/. The goal was to pare down the number of supported branches to (1) development/unstable, (2) stable, (3) extended support, and (4) subscription edition. We accomplished this by declaring End of Life for the 9.9 and 9.10 branches in 2018. It is too early to tell if the new model will improve the quality of the early releases on our next stable branch, 9.14, but it has already enabled us to get more users and more feedback on the development branch.\nBesides the investment in refactoring described above, 2018 was a year of tough love for BIND users.
When BIND 9 was originally written, there was a wider variety of available operating systems, including HP/UX, AIX, BSD, DOS, and Linux, and we proudly supported all of them. Many of these are no longer supported well enough to be viable options for most users, so we decided it was time to cut our support for them and focus on improving BIND for the majority. We didn’t support antiquated software to be stubborn; we did it because we really cared about supporting all our open source users. But we reached a point where that no longer made sense.\nSome of the changes:\nWorkarounds for old versions of UnixWare, BSD/OS, AIX, Tru64, SunOS, TruCluster, and IRIX have been removed. On UNIX-like systems, BIND now requires support for POSIX.1c threads (IEEE Std 1003.1c-1995), the Advanced Sockets API for IPv6 (RFC 3542), and standard atomic operations provided by the C compiler. Previously, it was possible to build BIND 9 without thread support. BIND now requires threading support (either POSIX or Windows) from the operating system, and it cannot be built without threads. BIND 9 will no longer build on platforms that don\u0026rsquo;t have proper IPv6 support. BIND 9 now also requires working POSIX-compatible pthread support; platforms lacking it are usually long past their end-of-life dates and are neither developed nor supported by their respective vendors. BIND can no longer be built without DNSSEC support. A cryptography provider (e.g., OpenSSL or a hardware security module with PKCS#11 support) must be available. DNSSEC validation is now enabled by default. The BIND team partnered with other open source DNS projects to announce the “DNS Flag Day” in 2019, ending support for “fixups” to accommodate non-compliant DNS implementations. This is, to some extent, the culmination of the campaign Mark Andrews started back in 2015 to expose and correct the gaps in EDNS standards compliance. We also removed support for Windows XP(tm), finally.
Windows is still a very popular platform for BIND, but there are no engineers on the BIND team who specialize in it, nor do we have adequate automated test coverage for Windows. We had an external contributor, Rockerinthelocker (later revealed to be Thomas Jach), who contributed several needed Windows fixes and also advised us on some Windows issues.\nAfter dropping that legacy system support, we were able to modernize our networking stack to take better advantage of threading and reduce context-switching. Witold \u0026ldquo;hold my beer\u0026rdquo; Kręcicki spent the last three months of 2018 refactoring the networking code. Through a massive series of commits, he eventually replaced our ancient task manager and socket code. The manager uses per-CPU queues for tasks and the network stack runs multiple event loops in CPU-affinitive threads. This greatly improves performance on large systems, especially when using multi-queue NICs. This project is equivalent to removing the bald, flat tires from a car while it’s in motion and replacing them with steel-belted radials. We expect that these new tires will really pay off when we implement DNS over TLS in 2019.\nNew features added in 2018 include:\nQNAME minimization, sponsored by the Open Technology Fund. Mirror Zones, intended to facilitate running a local copy of the root, sponsored by ICANN. Implementation of Serve Stale to provide additional protection against an outage like the massive DDoS that hit Dyn in October 2016, thanks to a patch generously contributed by Akamai. At the end of 2018, we commissioned a BIND 9 project logo from Richard de Ruijter, who also designed the new NLnet Labs project logos.
It was nice to be able to afford the small luxury of a professionally designed logo!\nBIND 9 by the Numbers\nReleases\nWe issued releases in the following branches of BIND 9 in 2018:\n9.9.13 (this branch went to End-of-Life as of July 2018) 9.10.7-S (this branch went to End-of-Life as of July 2018) 9.11.3-S, 9.11.3-S2, 9.11.4, 9.11.4-S, 9.11.5, 9.11.5-S1, 9.11.5-S2, 9.11.5-P1 (released in January 2019) 9.12.1, 9.12.2, 9.12.3, 9.12.3-P1 9.13.0, 9.13.1, 9.13.2, 9.13.3, 9.13.4, 9.13.5 There were six BIND CVEs in 2018.\nIssues Opened/Resolved\nHere are some approximate numbers of BIND issues created and closed in 2018:\n777 issues were created since the migration to GitLab (some were brought over from RT) 508 issues have been closed 269 are still open In our pre-GitLab system, there were 96 bug reports closed/rejected and resolved in 2018, and 62 new issues opened.\nOur THANKS to the following people for their technical contributions to BIND 9 in 2018\nTony Finch with an impressive 16 commits in GitLab, and five in our pre-GitLab system\nPetr Menšík with 12 commits in GitLab, plus one in our pre-GitLab system\nAndreas Hasenack\nBhargava Shastry\nBill Parker\nHåvard Eidnes\nKevin Chen\nMathieu Arnold\nPaul Hoffman\nRoland Gruber\nThomas Jach\nTomas Hozza\nZhaolong Zhang\nWe also love to see comments, suggestions, and even complaints from all our users on the bind-users mailing list and in our gitlab.isc.org BIND project.\nProjects – Kea In 2018 we finally started building a solid base of Kea DHCP support customers, and we began to see more ISC DHCP support customers prepare to migrate to Kea. Our premium hooks packages (sold on our website for US$499) turned out to be a successful experiment that we plan to continue. Our biggest thrill of the year, however, was adding a much-needed new development team member, a senior QA engineer.\nWe migrated Kea to GitLab, adopting gitlab.isc.org as both the source repo and the issue tracker.
We disabled issue tracking on the Kea GitHub, which we are keeping as a read-only repository. We are still doing our integration testing in Jenkins (jenkins.isc.org), but we are looking at migrating that to GitLab in the future. Our old Kea Trac site is still available at oldkea.isc.org for reference. ISC participated in the Google Summer of Code (GSOC) program for the first time, mentoring two student developers who contributed a new feature to Kea and released an open source dashboard for Kea. While this was a good experience for ISC, with such a small team it can be hard to spare the resources to mentor and support GSOC students adequately. (https://www.isc.org/blogs/kea-google-summer-of-code-projects/) We started using the ISC Perflab tool to do performance testing of Kea. We released a high-availability feature that removed the last remaining significant obstacle for ISC DHCP users to migrate to Kea. By implementing a less complicated HA feature instead of failover, we were able to support multiple deployment models with one feature (DHCPv4 AND DHCPv6, the standard memfile, AND DB backend options). Using a significant code contribution from Deutsche Telekom, we added a Cassandra database backend. We produced two more premium hook libraries to help manage Kea: the RADIUS integration hook and the client classification hook. Kea 1.5 added YANG model support through integration with the Sysrepo open source configuration datastore. We added a hooks API guide to the documentation. In addition, the Kea team spent many hours working on a thorough revision of the DHCPv6 standards, which were standardized in November, 2018 as:\nRFC 8415 (was draft-ietf-dhc-rfc3315bis) https://datatracker.ietf.org/doc/rfc8415/ Dynamic Host Configuration Protocol for IPv6 (DHCPv6). 
This was a big project: it took 13 revisions to complete this 150-page RFC, which obsoleted seven existing RFCs.\nProjects – ISC DHCP We put out a new branch of ISC DHCP, 4.4, which we intend to be the last branch of ISC DHCP. The key changes included:\nDynamic DNS additions dhclient improvements Support for dynamic shared libraries (shared with BIND 9) Other than that, our work on ISC DHCP was mostly limited to fixing reported bugs and responding to support customer issues. We are minimizing new development on ISC DHCP as it gradually nears its end of active maintenance.\nWe weren\u0026rsquo;t able to invest much more work into our ISC DHCP to Kea configuration migration tool, which we are currently testing with some of our ISC DHCP support customers. It is expected that the most difficult part of an ISC DHCP configuration to migrate will be the client classification, because ISC DHCP is so flexible. In a GitLab milestone (https://gitlab.isc.org/isc-projects/kea/-/milestones/6), we have been identifying and tracking all the differences between the two projects that could potentially trip up users as they migrate.\nISC DHCP and Kea by the Numbers\nReleases\nWe issued two major feature releases of Kea and published our last planned major branch of ISC DHCP. Kea continues to be a very active project with an emphasis on new feature development. Currently, we produce only new feature versions, and we support only two versions at a time. As our user base grows, we are considering introducing parallel stable and development branches, with maintenance-only releases on the stable branch.\nNew features added in 2018 included, by release:\nKea 1.4 – High Availability (HA), RADIUS integration, and a supported Cassandra backend. Kea 1.5 – The sysrepo YANG model configuration store, HA improvements, an authoritative flag, global host reservations, and a new client classification hook module.
DHCP 4.4.0 \u0026amp; 4.4.1 – the last major branch https://www.isc.org/blogs/isc-dhcp-the-last-branch/ featured improvements in DDNS, the DHCP client, and Dual-stack mixed mode (https://www.isc.org/blogs/using-dual-stack-mixed-mode-dsmm-with-ddns-in-isc-dhcp-4-4/). DHCP 4.1-ESV-R15, 4.1-ESV-R15-P1 DHCP 4.3.6-P1 There were two ISC DHCP CVEs and one Kea CVE in 2018.\nIssues Opened/Resolved\nBecause we migrated our issue tracker, and migrated some of the open issues, all of our bug ticket metrics are somewhat messed up, to be honest.\nWe opened 393 new Kea issues in GitLab, all of them in 2018. Some of these were migrated over from our older Trac system. We closed 295 issues in our old Trac system in 2018; 184 of them were for the Kea 1.4 milestone, which was completed before we migrated to GitLab. We closed 181 tickets in the Kea GitLab instance to date. So, we closed at least 181 + 184 = 365 issues, and probably more. At any rate, it seems that we are closing issues roughly as fast as we are opening them, which is a good indication that we are keeping up.\nOf the 86 new ISC DHCP tickets, 40 (nearly half) were opened as confidential tickets. 21 of the tickets opened in 2018 were resolved or closed in 2018. Overall, we resolved 26 ISC DHCP tickets and closed another four in 2018.\nWe thank the following people for their technical contributions to ISC DHCP and Kea DHCP in 2018\nThe following people submitted patches, reported bugs, or suggested valuable features or changes to ISC DHCP in 2018:\nBill Shirley\nBrad Fitzpatrick\ndgutier-at-cern-dot-ch\nFelix Wilhelm of the Google Security Team\nFernando Soto from BlueCat Networks\nGabriel Valcazar\nPeter Anvin\nIndy, of the Fireball ISO open source project\nJiri Popelka of Red Hat\nNaiming Shen and Enke Chen of Cisco\nPavel Kankovsky\nPavel Zhukov at Red Hat\nPeter Lewis\nTim DeNike of Lightspeed Communications\nFor Kea, our top contributor was Razvan Becheriu.
Other 2018 contributors included:\nAndrei Pavel\nFranciszek Górski\nFranek18\nPiotr Strzyżewski\nPlyul\nSebastian Schrader\nSunil Mayya (our GSoC student)\nResearch\nIn 2018, our research team worked on the following projects:\nethq, a Linux NIC monitoring tool (https://www.isc.org/blogs/ethq-linux-nic-monitoring-tool/) dnsgen – a packet generator https://www.isc.org/blogs/dnsgen-a-dns-packet-generator/ Updating the ISC Dig app, available on iOS; 2271 copies were downloaded in 2018 (and over 500 of these were from Japan!) The ISC DNS Checker app, available on iOS; 670 copies were downloaded in 2018. It was most popular in the US, Germany, and the UK. Other Random Statistics\nPresentations by ISC: 17 (https://www.isc.org/mission/webinars/)\nISC Staff: 32 in 9 countries and 8 US states\n2018 email traffic on user mailing lists:\nBIND-users: 1763; Kea-users: 602; DHCP-users: 465\nEmails for help or information to info-at-isc: 883.\n","permalink":"/blogs/2018-in-review/","tags":["2018","review"],"title":"2018 in Review"},{"categories":["BIND","DNS"],"contents":"A number of DNS software and service providers have announced that we will all cease implementing DNS resolver workarounds to accommodate DNS authoritative systems that don\u0026rsquo;t follow the Extensions to DNS (EDNS) protocol. Each vendor has pledged to roll out this change in some version of their software by the \u0026ldquo;Flag Day.\u0026rdquo;\nChanges coming in BIND BIND 9.13.4 and later versions remove the EDNS workarounds, and BIND 9.14.0, which will be posted soon after Flag Day, will also remove those workarounds. This change will NOT be back-ported to the 9.11 or earlier branches of BIND.
This is because of our policy of not making unnecessary changes to our stable extended support release.\nThe BIND 9 authoritative server has always been EDNS-compliant (since EDNS was specified).\nNon-compliant domains may become unavailable Domains served by DNS servers that are not compliant with the standard will not function reliably when queried by resolvers that have been updated to the post-Flag Day version, and may become unavailable via those updated resolvers.\nIf your company’s DNS zones are served by non-compliant servers, your online presence will slowly degrade or disappear as ISPs and other organizations update their resolvers. When you update your own internal DNS resolvers to versions that don\u0026rsquo;t implement workarounds, some sites and email servers may become unreachable.\nTest your domains Operators of DNS authoritative systems should check their own systems at https://www.dnsflagday.net/ to ensure they are EDNS-compliant. Most common failures are due to firewalls blocking EDNS traffic, or older DNS servers that need to be upgraded.\nIf you are running any supported version of BIND 9, you are compliant already. If the test site isn\u0026rsquo;t working (there has been quite a spike in traffic recently!), you can also run the tests manually, using DIG. See instructions on that at the bottom of this blog. Keep in mind that if you are scanning for potential problems with your firewalls, you need to run the test across that perimeter.\nIf you run the tests from ednscomp.isc.org, and you get failures due to timeouts, consider that possibly your RRL settings on your authoritative servers may be rate-limiting the queries coming from the test tool. ednscomp.isc.org is sending dig queries from 149.20.1.48 and 2001:4f8:1:f::48. Temporarily whitelisting those addresses in your RRL settings could fix the problem. 
We have also had to protect ednscomp.isc.org from mass scanning, so if you are running many queries, you will also be rate limited by ednscomp.isc.org.\nOperators of resolvers should consult the table below, or their vendor, to determine whether the software they are running has EDNS workarounds or not. If/when workarounds are removed, resolver operators need to be on the lookout for reports that some websites have become unreachable, and should consider EDNS incompatibility to be one of the potential causes when troubleshooting. If that turns out to be the case, the solution is to notify the problem site so that they can remediate the problem.\nAt ISC, we have been reaching out to operators of non-compliant sites for years, urging them to update. Although there are still a few non-compliant sites that will cause problems for the rest of us, the only long-term solution is for the operators of those sites to resolve their compliance problems.\nOpen source DNS software plans The organizations who have agreed to update their software or systems include:\nOrganization Resolver Version for Flag Day Notes ISC BIND 9.14 (stable) to be released at the end of January 2019 – removes resolver workarounds for servers that misbehave when queried with EDNS BIND 9's authoritative systems are compliant with EDNS already. The change in 9.14 removes the workarounds from the BIND resolver. CZ NIC Knot Resolver – newer than the others – already written without most of the workarounds for misbehaving servers. 3.3.0 (soon to be released) has some minor changes\nIn general, anyone already running the latest version (3.2.0) should not notice any significant differences when upgrading to 3.3.0.\nNLNET Labs Unbound versions released after 1st February 2019 (1.8.4, 1.9.0 and newer) will no longer retry without EDNS when no response is received\nUnbound will still accept answers without EDNS, and will still send a query without EDNS when it receives a FORMERR or NOTIMPL answer. 
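For readers who want to understand what these tests exercise on the wire, here is a minimal sketch of an EDNS(0) query, built with only the Python standard library. The OPT pseudo-RR layout follows RFC 6891; the fixed query ID and the SOA qtype are arbitrary choices for illustration, and a real tool such as dig handles far more (ID randomization, retries, flag variations).\n
```python
import struct

def build_edns_query(qname, qtype=6, udp_size=4096):
    """Build a DNS query for qname carrying an EDNS(0) OPT pseudo-RR,
    per RFC 6891 (qtype 6 = SOA; the fixed ID is for illustration)."""
    header = struct.pack(">HHHHHH",
                         0x1234,   # query ID (fixed for illustration)
                         0x0000,   # flags: plain non-recursive query
                         1,        # QDCOUNT: one question
                         0, 0,     # ANCOUNT, NSCOUNT
                         1)        # ARCOUNT: the OPT record lives here
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00" + struct.pack(">HH", qtype, 1)      # QTYPE, QCLASS=IN
    # OPT pseudo-RR: root owner name, TYPE=41; CLASS carries the
    # advertised UDP payload size, TTL carries extended-RCODE/version/
    # flags (all zero for plain EDNS version 0), RDATA is empty.
    opt = b"\x00" + struct.pack(">HHIH", 41, udp_size, 0, 0)
    return header + question + opt

pkt = build_edns_query("example.com")
```
\nA compliant authoritative server answering such a query must echo an OPT record of its own; servers that drop or mangle the query are exactly what the compliance tests detect.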
(This change will only affect queries that result in a time-out because of EDNS in the query.)\nPowerDNS PowerDNS recursor 4.2 (to be released soon) will be the first one to no longer accommodate non-compliance\nOn the authoritative side, PowerDNS 4.1 is fully compliant; 4.0 has some corner cases that ednscomp notices but that are not a problem in practice – disabling caching removes those edge cases.\nWhy make this change now? Extension Mechanisms for DNS were specified in 1999, with a minor update in 2013, establishing the \u0026ldquo;rules of the road\u0026rdquo; for responding to queries with EDNS options or flags. Despite this, some implementations continue to violate the rules. DNS software developers have tried to solve the problems with the interoperability of the DNS protocol and especially its EDNS extension (RFC 6891 standard) by various workarounds for non-standard behaviors. This is not unlike the way a driver with the right-of-way might hesitate at an intersection before proceeding if there were another driver in the intersection behaving erratically. These workarounds excessively complicate DNS software and are now also negatively impacting the DNS as a whole.\nThe most obvious problems caused by these workarounds are slower responses to DNS queries and the difficulty of deploying new DNS protocol features. Some of these new features (e.g. DNS Cookies) would help reduce DDoS attacks based on DNS protocol abuse.\nAt ISC we have been probing the DNS for years, looking at EDNS compatibility. We have seen significant improvement over the past four years, and we think this is about as good as we will get as long as we have workarounds in place. Below is one of a number of charts available at https://ednscomp.isc.org/compliance/summary.html. If you are interested, look at some of the other charts which show improvements in EDNS awareness and EDNS compatibility over the years. 
Some of the EDNS compatibility problems noted are more serious than others.\nOur goal is a reliable and properly functioning DNS that cannot be easily attacked.\nResources and References DNS Flag Day website\nBlog on DNS Flag Day\nEarlier blog on EDNS fallback behavior\nBlog on EDNS compliance issue\nPresentation on DNS Flag Day on UKNOF, January 15th, 2019\nKB article – DNS Flag Day, How Will it Impact You?\nKB article – DNS Flag Day, Notes for Authoritative servers\nKB article – EDNS compatibility tests and status codes\nChecking EDNS Compatibility with DIG The ednscomp.isc.org site uses a modified version of DIG to run multiple tests in series. This test version is available for download at https://gitlab.isc.org/isc-projects/DNS-Compliance-Testing.\nYou can also use the dig included in the BIND distribution and run multiple queries yourself to check a suspicious domain. For instructions on how to do this, see this Knowledgebase article.\n","permalink":"/blogs/dns-flag-day/","tags":["DNS Flag Day","BIND"],"title":"DNS Flag Day – February 1, 2019"},{"categories":["BIND","DNS","DHCP","Open Source"],"contents":"We wish all our customers, contributors, donors, and friends all the best for a wonderful holiday season and a very happy 2019. Thank you for your ongoing support of open source software for an open Internet!\n","permalink":"/blogs/happy-holidays-from-isc/","tags":["holidays","2018","ISC"],"title":"Happy holidays from ISC!"},{"categories":["Kea","DHCP"],"contents":"Challenge: How to provision more devices, faster, without increasing errors? Speed, Agility, Accuracy Network administrators are under pressure to streamline provisioning of new devices so they can scale up, deploy new services, or otherwise make the network serve changing business needs faster than ever. 
The key to achieving this new speed and flexibility is centralized configuration control.\nIn Kea 1.5 (Download, Release notes) we have enabled a standardized provisioning interface (NETCONF), using a configuration database implemented with the Sysrepo configuration datastore. This new provisioning model is particularly useful when coordinated changes to multiple network elements are required.\nThe Sysrepo engine is an open source (Apache license) datastore that uses YANG data models. Kea 1.5 implements ISC YANG models for DHCPv4 and DHCPv6 and integrates with the Sysrepo engine, which provides a northbound interface to a NETCONF configuration control system.\nNETCONF and YANG NETCONF and YANG model support is available for network equipment from popular vendors including Juniper, Huawei, and Cisco. When implementing centralized configuration for your Kea DHCP servers, it makes sense to combine that with centralized configuration for the other network elements associated with IP addressing, such as routers, switches, bridges, and relays.\nYANG, somewhat like a next-generation SNMP MIB, is a structured data model that mirrors the configuration data structure in the network equipment. (YANG is defined in RFC 6020.) A YANG model for a device can be combined with policy information to enable a provisioning system to specify device configuration changes to add or modify a service. YANG modeling has the potential to facilitate vendor-agnostic provisioning to the extent that multiple vendors can support standardized YANG models. A YANG model can import standard modules (such as one for an IPv4 interface). Kea 1.5 implements customized data models for DHCPv4 and DHCPv6 servers, which incorporate some standard modules. The DHCPv6 model is adapted from the draft standard IETF DHCPv6 model. The IETF is not currently pursuing standardization of a DHCPv4 YANG model.\nThe NETCONF protocol is defined in RFC 6241.
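To make the NETCONF/YANG pairing concrete, here is a sketch of the kind of \u0026lt;edit-config\u0026gt; RPC a NETCONF client might send to stage a DHCPv4 subnet change. The outer elements (rpc, edit-config, target, candidate, config) come from RFC 6241; the payload element names (dhcpv4-server, subnet, prefix, pool) are hypothetical placeholders, not Kea\u0026rsquo;s published YANG module.\n
```python
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"  # NETCONF base namespace, RFC 6241

def make_edit_config(prefix, pool):
    """Sketch an <edit-config> RPC pushing one DHCPv4 subnet into the
    candidate datastore. The payload element names are hypothetical,
    not Kea's actual YANG schema."""
    rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}candidate")   # stage, validate, then commit
    config = ET.SubElement(edit, f"{{{NC}}}config")
    server = ET.SubElement(config, "dhcpv4-server")   # hypothetical element
    subnet = ET.SubElement(server, "subnet")
    ET.SubElement(subnet, "prefix").text = prefix
    ET.SubElement(subnet, "pool").text = pool
    return ET.tostring(rpc, encoding="unicode")

payload = make_edit_config("192.0.2.0/24", "192.0.2.10 - 192.0.2.200")
```
\nTargeting the candidate datastore rather than the running one is what lets an orchestrator validate and commit (or roll back) a batch of changes across many devices at once.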
NETCONF provides robust support for change control with support for transactions, multiple datastores, configuration change validation, checkpointing, commit, rollback and other features. The combination of NETCONF controls and YANG modeling permits frequent coordinated network reconfiguration, such as may be needed for SDN architectures.\n","permalink":"/blogs/kea-1-5/","tags":["Kea","Releases"],"title":"Kea 1.5 – Centralized configuration control"},{"categories":["BIND"],"contents":"There is a fundamental flaw in the way that some web server operators use the DNS to identify the IP address of the web servers to which they connect. This article attempts to explain and justify this assertion.\nUntil quite recently the website for a domain name would usually be hosted on a server named \u0026ldquo;www\u0026rdquo; on that domain. Sometimes the server wasn\u0026rsquo;t really named that, but a CNAME record sufficed to direct users to the right server, even if that was, strictly speaking, a breach of the semantics of the CNAME record.\nThese days, however, the corporate marketing folks want to publish \u0026ldquo;bare\u0026rdquo; domain names in their URLs. This causes problems if the site is hosted on a third-party CDN because CDNs usually only publish the hostnames and not the IP addresses of their servers; the DNS data model prohibits placing a CNAME at the apex of a domain name (i.e. at the zone cut) because the domain\u0026rsquo;s NS and SOA records have to go there. This is, however, just a corollary of the fact that you can\u0026rsquo;t place a CNAME record alongside any other DNS resource record. If I want to use a CNAME record to direct the web content for research.example.com to a CDN, I can\u0026rsquo;t also place an MX record there.\nBack in 2009, the IAB published RFC 5507 \u0026ldquo;Design Choices When Expanding the DNS\u0026rdquo;. 
The title is perhaps misleading, but in general it discusses the issue of the different ways that the DNS can be used to map from a service offered by a domain to the relevant information about that service. In summary there are three practical ways that a service identifier can be used in the DNS:\nappend a prefix to the domain name (e.g. SRV underscore records);\nput something service specific in the data (e.g. TXT SPF records); or\nuse a service-specific resource record (RR) type.\nGenerally, the document recommends the latter course. It\u0026rsquo;s far easier to obtain new RR types than it used to be.\nWebsites did #1 via their use of the \u0026ldquo;www\u0026rdquo; prefix but really did so by accident. It wasn\u0026rsquo;t part of any standard, but it was a very widely used convention, and it worked. That \u0026ldquo;www\u0026rdquo; prefix was the service identifier; it was a pretty blatant hint that the A and AAAA records were there for web use. That prefix meant \u0026ldquo;direct me to the web service for this domain.\u0026rdquo; However, when you try to visit a URL without a \u0026ldquo;www\u0026rdquo; prefix, there\u0026rsquo;s no longer a service identifier – your browser treats the domain part of the URL as a hostname but you\u0026rsquo;re still really looking for a service. The tendency towards prefix-less URLs whilst still requiring lookups for the A or AAAA records of the given domain ignores that fundamental distinction.\nMy proposal then, is a type #3 service identifier for web use – the HTTP resource record. Its format looks exactly like a CNAME, but it can co-exist alongside other resource records. Think of it as an \u0026ldquo;MX record for web clients,\u0026rdquo; albeit without the priority field. It is expected (albeit not mandatory) that when asked for an HTTP record, DNS resolvers would return any corresponding A and AAAA records from cache. 
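The client-side lookup flow this proposal implies can be sketched as follows. Everything here is hypothetical — the HTTP record type was only a proposal, the lookup function is a stub standing in for a real resolver, and the example names and addresses are invented for illustration.\n
```python
def resolve_web(domain, lookup):
    """Sketch of the flow a proposed HTTP RR implies: ask for the HTTP
    record first; if one exists, chase the target hostname's address
    records, otherwise fall back to the domain's own A/AAAA records.
    `lookup(name, rrtype)` is a stub standing in for a real resolver."""
    target = lookup(domain, "HTTP") or domain   # HTTP RR names the web host
    return target, lookup(target, "A")

# A toy zone: the apex keeps its own records but gains an HTTP record,
# something a CNAME could not legally do alongside SOA/NS/MX.
ZONE = {
    ("example.com", "HTTP"): "cdn.example.net",
    ("cdn.example.net", "A"): "192.0.2.80",
    ("www.example.com", "A"): "192.0.2.1",
}

def lookup(name, rrtype):
    return ZONE.get((name, rrtype))

host, addr = resolve_web("example.com", lookup)
```
\nThe fallback branch matters for incremental deployment: a domain with no HTTP record behaves exactly as today.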
If the data isn\u0026rsquo;t already in cache, resolvers may go ask for it before returning the HTTP answer.\nAll that said, there are two downsides to my proposal, although the long term result should be a much cleaner architecture that is consistent with RFC 5507 and also solves all of the issues with CNAME.\nThe first downside is that web clients would have to explicitly ask for the HTTP record, and that means client codebase changes. That said, I think the implementation effort would be significantly less than that needed for a DNS over HTTP (DoH) client, and it\u0026rsquo;s a one-time hit.\nThe second downside is that if an HTTP record is returned but the additional A and AAAA records from cache are not, the web client then has to make a second A / AAAA request for the hostname returned by the HTTP record. I expect this to be only a short-term issue, though – most resolver implementers deploy new RR types very quickly after IANA assignment of the code point.\nI know this won\u0026rsquo;t be painless, but I honestly believe that there is no way to fix the apex CNAME issue through magic tricks in recursive or authoritative servers. Doing so puts too much burden on the DNS, and fails to resolve the architectural flaw discussed above. Web clients need to start using an application-specific service identifier in the DNS just like every other Internet service.\n","permalink":"/blogs/dns-and-web-architecture/","tags":["DNS","CNAME"],"title":"An Architectural Perspective on the Interaction between Web Clients and the DNS"},{"categories":["BIND"],"contents":"\nIf you’ve visited ISC\u0026rsquo;s GitLab instance in the last few days, you may have noticed something: BIND 9 has a new logo! In fact, this is the first time that BIND 9 has had a logo at all, despite the fact that the software has been in development since 2000. So why now?\nBIND 9 is ISC’s flagship product and BIND 9 support contracts represent the lion’s share of ISC’s annual revenues. 
Over the past couple of years we have begun modernizing and refactoring BIND 9’s code to make it less complex and more robust, and that work will continue. In that spirit, we felt that it was time to create a logo that reflected BIND 9’s future.\nWorking with designer Richard de Ruijter, we wanted to come up with something both modern and classic, clean and simple yet evocative of BIND 9’s strengths. We are very happy with the new logo and hope you like it as well. We invite your feedback at marketing@isc.org.\nNeed to download one of ISC\u0026rsquo;s logos? Check this page.\n","permalink":"/blogs/bind-9s-new-logo/","tags":["BIND","logo"],"title":"BIND 9’s new logo!"},{"categories":["DHCP","Kea"],"contents":"Interest in YANG models for standardizing device configuration is exploding at the IETF. At last week\u0026rsquo;s IETF meeting in Bangkok there were many new drafts proposing YANG support for various technologies. This beta version of Kea features our first ISC YANG model support, for DHCPv4 and DHCPv6.\nWe are also integrating with Sysrepo, another open source project, that implements a Netconf system for storing and applying configuration data.\nThe key features in Kea 1.5 are:\nYANG/NETCONF – the ability to store major configuration elements in a YANG model and manipulate it using NETCONF Support for global host reservations (previously each host reservation had to be associated with a specific subnet) Class commands – a premium new hook that allows dynamic changes to be made to client classes without restarting Performance and resiliency improvements to High Availability hook A new congestion control feature to mitigate the effects of heavy DHCP traffic conditions Improvements to the High Availability feature, including paged updates between HA pairs to alleviate timeouts. We are eager to get users to field test this release. 
In particular, we would like feedback on the usability and functionality of the YANG/Netconf feature, and if anyone is able to test Kea under very heavy loads, feedback on how the congestion control works. In our tests we have found it decreases overall throughput at extremely high loads, but in more moderate overload conditions, it improves our ability to process current requests over stale ones.\nUsers interested in testing the new premium hook for managing client classes, please email to info at isc dot org and ask for a coupon code. This will enable you to download the premium hooks package for free from our website, in exchange for your test feedback.\nWe are planning to issue a second beta at the end of November, and then a final version in mid-December.\nKea 1.5-beta1 release notes\nDownload Kea\n","permalink":"/blogs/kea-1-5-0-is-ready-for-beta-testing/","tags":["Kea","DHCP"],"title":"Kea 1.5.0 is ready for beta testing"},{"categories":["Open Source","Performance","Tools"],"contents":"In my previous blog article, I talked about the need to generate an even spread of traffic across the queues of a modern multi-core NIC to achieve optimal performance.\nThe Intel X710 cards that we use in our performance testing lab distribute the packets to queues based on the value of a hash calculated for each incoming packet based on the source and destination IP addresses and port numbers.\nWhilst this works well for traffic arriving from the internet, in a lab environment with a very limited range of traffic sources we found that the hashing produced a very uneven load on each CPU core.\nOur lab uses the well-known dnsperf package from Nominum to generate streams of queries, but there are quite low limits on the number of unique source ports that we were able to use without causing dnsperf itself to drop in performance. 
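To see why a small set of flows lands unevenly on queues, here is a toy illustration. It uses a deterministic CRC32 over the 4-tuple as a stand-in for the NIC\u0026rsquo;s real (Toeplitz-style) receive hash; the addresses, port range, and queue count are invented for the example.\n
```python
import struct
import zlib
from collections import Counter

def queue_for(src_ip, src_port, dst_ip, dst_port, num_queues):
    """Toy stand-in for a NIC's receive-side flow hash: any deterministic
    hash of the 4-tuple shows the same clumping effect with few flows."""
    key = struct.pack(">4sH4sH",
                      bytes(int(o) for o in src_ip.split(".")), src_port,
                      bytes(int(o) for o in dst_ip.split(".")), dst_port)
    return zlib.crc32(key) % num_queues

# 28 client flows (one source address, 28 source ports) over 16 queues:
counts = Counter(
    queue_for("192.0.2.7", 50000 + i, "192.0.2.1", 53, 16)
    for i in range(28)
)
```
\nWith only 28 flows there are not enough "balls" for the "bins" to even out, so some queues (and their CPU cores) end up doing a disproportionate share of the work, while internet-scale traffic with millions of flows balances naturally.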
We were able to set the number of clients (and hence source ports) to 28 but no more.\nSince we were also hitting the apparent per-packet limits of what a single instance of dnsperf can generate (approximately 1 Mpps), I set out to see what\u0026rsquo;s possible by bypassing the operating system. The result is dnsgen (https://github.com/isc-projects/dnsgen), and this is now replacing dnsperf as our preferred test application (the Perflab code has also been updated to allow user selection of the traffic generation method).\nUnlike dnsperf, it uses AF_PACKET raw sockets and therefore only runs under Linux. This use of raw sockets is what allows for the use of a far larger range of source ports and higher performance than using \u0026ldquo;normal\u0026rdquo; UDP sockets.\nTo reduce CPU load, dnsgen does not attempt to correlate received packets with those it has transmitted. It simply counts the packets that arrive back on the network interface. It is therefore best used on a network interface that is directly connected to the server under test and not shared with any other services.\nIn normal operation the packets-per-second value reported is the peak rolling average of the received packet rate observed during the run. To find this value, dnsgen starts sending packets at the specified initial sending rate and then measures the rate at which packets are received. The sending rate is then adjusted every 0.1s to be the midpoint between the maximum observed rate so far and the received rate, plus a specified \u0026ldquo;increment\u0026rdquo; rate. Eventually a steady state should be achieved when the difference between the received rate and the transmitted rate is equal to the increment rate, and where that increment represents a small overhead in lost packets.
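The adjustment rule described above can be sketched as a one-line controller. This is a reading of the description in this post, not dnsgen\u0026rsquo;s actual source, and the capacity figure in the toy run is invented:\n
```python
def next_send_rate(best_rx_rate, last_rx_rate, increment):
    """dnsgen-style rate adjustment as described above: aim at the
    midpoint between the best receive rate seen so far and the most
    recent receive rate, plus a fixed probing increment."""
    return (best_rx_rate + last_rx_rate) / 2.0 + increment

# Toy convergence run against a server that can absorb at most 1 Mpps:
capacity, increment = 1_000_000, 10_000
send = 500_000            # initial sending rate
best = 0.0
for _ in range(100):      # one iteration per 0.1 s interval
    rx = min(send, capacity)      # replies cap out at server capacity
    best = max(best, rx)
    send = next_send_rate(best, rx, increment)
```
\nAt steady state the send rate sits exactly one increment above the server\u0026rsquo;s capacity, so the increment is the small, deliberate overhead of lost packets the post mentions.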
In the alternative \u0026ldquo;ramp\u0026rdquo; mode, packets are transmitted at the specified starting rate with the rate increasing thereafter by the specified increment every 0.1s without regard to the inbound received rate.\nWith this code we\u0026rsquo;re now able to test performance up to the 3-4 Mpps range. The packet generation code itself can actually push 10 Mpps but when handling bi-directional traffic flows the total performance does drop somewhat. The source code repository also includes an \u0026ldquo;echo\u0026rdquo; server that uses raw sockets to receive and return packets.\nNB: This is not an official ISC product release. It\u0026rsquo;s an application developed for internal use that we believe may be of use to other DNS researchers. While the application already meets our current needs there\u0026rsquo;s still plenty of room for improvement, both in performance and features. In particular there\u0026rsquo;s no IPv6 support yet.\nThe project is hosted on our Github at https://github.com/isc-projects/dnsgen and we\u0026rsquo;d welcome feedback and contributions from the community.\n","permalink":"/blogs/dnsgen-a-dns-packet-generator/","tags":["BIND","dnsgen"],"title":"dnsgen – A DNS Packet Generator"},{"categories":["Open Source","Kea"],"contents":"Report from the Kea Hackathon in Gdańsk last week:\nWe spent the first day of the hackathon working on the overall design for the management API for the \u0026ldquo;configuration in the database backend\u0026rdquo; feature coming in Kea 1.5, among other things. The discussion was around whether to overload the current existing subnet management commands, or to create a new set of API commands for managing subnets that are in the database backend. Other engineers working remotely advanced the Netconf and Yang support, also planned for Kea 1.5.\nOn Day 2 we were really happy to have two experienced Systems Administrators from Silesian University, Adam Osuchowski and Piotr Strzyżewski. 
Adam submitted five Kea issue reports via gitlab.isc.org. Many of them were accompanied with patches. Thanks a lot to Adam and Piotr for detailed issue reports and patches! One issue Adam raised at the hackathon was the decision to make Kea run in the foreground. Pros and cons of being able to run in background were fiercely debated.\nJeff Osborn, ISC\u0026rsquo;s President, joined the group for Day 2. We spent over an hour on Day 2 of the hackathon talking to Adam and Piotr about DHCP management at Silesian and their requirements for management tools.\nWe also had two student participants: Franek Górski (who fixed a Kea bug during the hackathon) and Aleksandra Jereczek. Another highlight was the appearance of ISC\u0026rsquo;s new QA Engineer – Michał Nowikowski, who will be joining us in November.\nA lot of YANG/NETCONF code was reviewed and merged to master. The solution is not ready yet, but we made some progress in the right direction.\nProf. Józef Woźniak of the faculty of the ETI (Electronics, Telecommunication and Informatics) department kicking off the hackathon.\nISC thanks Prof. Jerzy Wtorek (dean of ETI faculty), Prof. Jacek Rak (dept. of Computer Communications), Prof. Józef Woźniak (dept. of Computer Communications), and Dr. Krzysztof Nowicki (dept. of Computer Communications) for their hospitality, their kind offer to host the event, and for encouraging students to participate. Thank you for having us and supporting open source!\nAfter the hackathon, some of the local participants went out to eat. 
Part of the value of hackathons is in the social interactions.\nDiscussing DHCP and other Internet problems in Gdańsk\n","permalink":"/blogs/kea-hackathon-in-gdansk/","tags":["Kea","hackathon"],"title":"Kea Hackathon in Gdańsk"},{"categories":["Open Source","Tools","Performance"],"contents":"Network Interface Cards (NICs) often maintain multiple queues for transmitted and received packets, and the Linux kernel can be tuned to have each queue handled by a specific CPU core. To obtain peak performance from a network service on modern multicore computer systems, it\u0026rsquo;s desirable to ensure a balanced workload on each NIC queue and CPU core.\nThe NIC drivers on Linux typically contain statistics that show the total number of packets and bytes transmitted and received. These can be accessed using the ethtool command, e.g. ethtool -S enp5s0f1. The drivers actually maintain a very large number of statistics, so to get at just the queue statistics you might use something like this:\n$ ethtool -S enp5s0f1 | grep '[rt]x-[0-9]'\ntx-0.tx_packets: 61122861442\ntx-0.tx_bytes: 10359359292618\nrx-0.rx_packets: 165894101718\nrx-0.rx_bytes: 12865093535024\ntx-1.tx_packets: 72709177633\ntx-1.tx_bytes: 11351345564539\nrx-1.rx_packets: 18640137236\nrx-1.rx_bytes: 1462729691538\n...\nHowever, it\u0026rsquo;s far from easy to turn these total numbers into a real-time display of your NIC queues.\nTo solve this, I\u0026rsquo;ve written and published a new tool called ethq. On supported NICs, every second it reads the NIC queue statistics and converts the absolute values into bytes-per-second and packets-per-second values for each queue. It also generates total Mbps figures.
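The core conversion ethq performs — turning successive snapshots of absolute counters into per-second rates — can be sketched as follows. The counter names follow the ethtool output shown above, but the parsing is a simplified illustration, not ethq\u0026rsquo;s actual (C++) implementation:\n
```python
def parse_queue_stats(ethtool_output):
    """Parse 'tx-0.tx_packets: 61122861442'-style lines from
    `ethtool -S` into a {counter_name: value} dict."""
    stats = {}
    for line in ethtool_output.splitlines():
        name, _, value = line.strip().partition(":")
        if name and value.strip().isdigit():
            stats[name] = int(value)
    return stats

def per_second_rates(prev, curr, interval=1.0):
    """ethq-style conversion: subtract successive absolute counter
    snapshots and divide by the sampling interval in seconds."""
    return {k: (curr[k] - prev[k]) / interval for k in curr if k in prev}

# Two snapshots taken one second apart (toy numbers):
prev = parse_queue_stats("tx-0.tx_packets: 1000\nrx-0.rx_packets: 5000")
curr = parse_queue_stats("tx-0.tx_packets: 1600\nrx-0.rx_packets: 7500")
rates = per_second_rates(prev, curr)
```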
Think of it as top for your NIC!\nHere\u0026rsquo;s what an unbalanced set of queues looks like:\nIn this case the imbalance is primarily caused by there being an insufficient number of traffic sources to correctly exercise the flow hashing algorithm in the NIC which distributes packets to queues based on fields such as the IP source address and UDP source port. The imbalance is further exacerbated by poor CPU core affinity between the CPU cores that are handling each inbound queue and the server daemon threads that subsequently process each packet.\nHere is a much better balanced set of queues:\nIn this case there are many thousands of traffic sources, and the network server (which in this case is just echoing the received packets) is using raw sockets configured using the PACKET_FANOUT_CPU option to keep traffic localised to the CPU core that received it.\nGetting the code\nThe code is on our GitHub page at https://github.com/isc-projects/ethq.\nCompiling the code will require that you have the development headers and libraries for ncurses installed.\nNIC Support\nBecause there\u0026rsquo;s no standard for the format of the names of the per-queue statistics variables (e.g. tx-1.tx_packets), the software currently only supports a limited set of NIC drivers: i40e (e.g. Intel X710), ixgbe (e.g. Intel X540) and vmxnet3 (VMware virtual NICs).\nIf you have a multiqueue NIC that isn\u0026rsquo;t yet supported, please file an issue on the project\u0026rsquo;s Github page including the output of ethtool -i and ethtool -S from your NIC.\n","permalink":"/blogs/ethq-linux-nic-monitoring-tool/","tags":["NIC","network interface"],"title":"ethq – A New Linux NIC Monitoring Tool"},{"categories":["BIND"],"contents":"Note: This blog post is historical. We have a knowledgebase article that provides more up to date information about BIND 9 packages from ISC.\nWe have begun offering some additional binary packages for BIND 9. 
We already offer binaries for Windows users, which are very popular, and we have been hearing that some users of other operating systems would also like packages from ISC.\nWhy are we doing this? For all open source users\nWe want to make sure that BIND users have access to binaries that include all of ISC\u0026rsquo;s latest bug fixes, the dependencies for key features like DNSTAP, and no other patches or fixes that ISC does not support.\nSome of the distributions do not provide the latest version of BIND in their packages, because of their rules about updating applications. Some of the BIND dependencies, specifically the DNSTAP feature, require software versions that are not up-to-date in the current official CentOS/RedHat packages. For ISC Support Subscribers specifically\nWe would like to offer support subscribers a CentOS image that has no downstream patches that ISC has not created or tested. Eventually we want to provide ISC Support Subscribers with an option for updating directly from a private ISC repository during the Advance Notification period immediately prior to announcing a BIND 9 security vulnerability. We will continue to supply ISC Support Subscribers who receive Advance Security Notifications with security patches or updated tarballs for everyone who wishes to build their own.\nWe also want to provide ISC Support Subscribers who use the -S Supported Preview version of BIND (aka the Subscription Edition) with an executable, since this version is not publicly available via the usual open source package sites. We plan to provide a CentOS package for BIND 9 -S edition soon.\nNew experimental packages: OS Architecture How is this different from the official package? ISC package location Windows 32-bit and 64-bit n/a https://www.isc.org/download CentOS (current versions) i386, x86_64, ppc64le Minimal changes from official ISC releases. 
Includes DNSTAP copr.fedorainfracloud.org: BIND 9 Extended Support Version (ESV), BIND 9 Stable version, BIND 9 Development version Ubuntu (current versions) Based on the official Debian package, includes downstream patches not from ISC. Includes DNSTAP. Launchpad:BIND 9 Extended Support Version (ESV), BIND 9 Stable version, BIND 9 Development version Fedora (current versions)) i386, x86_64, ppc64le Minimal changes from official ISC releases. Includes DNSTAP copr.fedorainfracloud.org: BIND 9 Extended Support Version (ESV), BIND 9 Stable version, BIND 9 Development version Debian – Ondřej Surý, Director of DNS Engineering at ISC, has joined the official Debian BIND 9 package maintainers\u0026rsquo; team.\nISC Support Subscribers also get: CentOS (current versions)) – packages with upcoming security patches incorporated will be available during the Advance Notification period to ISC support customers.\nWe plan to provide a CentOS version of the BIND-S subscription edition for support customers. It will not be available to the general public.\nThe advantages of using an ISC package are:\nThe BIND 9 code is up to date. This may be particularly important when updating after a security vulnerability is announced, although some OS packagers issue updated packages immediately when we announce a CVE. The BIND 9 version number will match the versions we are publishing, so it will be easier to tell what you are running. (Some distributions change the version number in their packages.) We will include the required libraries to support DNSTAP, which is a popular BIND 9 feature. This is not available currently in the standard RedHat packages. The ISC packages will be supportable by ISC – some of the OS packages include other code that we cannot support. The disadvantages of switching to an ISC package include:\nThe configuration may be different from the package you have been using. You will have to validate that the ISC package works for you. 
There may be distribution-specific fixes that you rely on that we can\u0026rsquo;t or won\u0026rsquo;t include. If you choose a binary with DNSTAP support, you will have some additional security exposure from the extra non-ISC code included. We cannot provide advance notification of security events for non-ISC code. We need your feedback Our CentOS package applied minimal, conservative configuration options (with the exception of DNSTAP). The configuration will be quite a bit different from the official RHEL and CentOS packages, but of course the BIND 9 version is up-to-date. The Ubuntu and Fedora packages use the same name as the existing Debian package, and are based on the existing Debian package – so they include non-ISC patches. These packages offer minimal changes from the packages you are used to, but with updated BIND 9 versions. These are obviously two very different approaches: do you prefer the \u0026ldquo;reset\u0026rdquo; we are doing with CentOS, or the \u0026ldquo;easy migration path\u0026rdquo; we are following with Debian, Ubuntu, and Fedora?\nPlease share your comments on the bind-users@lists.isc.org mailing list. To report a bug, please open an issue on our BIND 9 GitLab instance at https://gitlab.isc.org/isc-projects/bind9.\nNote: Original supported OS versions replaced with \u0026ldquo;(current versions)\u0026rdquo; on December 18, 2020\n","permalink":"/blogs/bind-9-packages/","tags":["BIND"],"title":"BIND 9 Packages From ISC"},{"categories":["BIND"],"contents":"The major changes in BIND 9.13 are related to code modernization. We have removed a number of workarounds and custom \u0026ldquo;fix-ups\u0026rdquo; for broken, non-compliant and obsolete operating systems. Some of these workarounds add significant complexity, due to the need to watch for and handle exceptions. 
Most of these workarounds are virtually untestable, which means that over time they become liabilities with no utility for most users.\nEDNS non-compliance Workarounds for servers that misbehave when queried with EDNS have been removed, because these broken servers and the workarounds for their noncompliance cause unnecessary delays, increase code complexity, and prevent deployment of new DNS features. See https://www.dnsflagday.net for further details.\nOperating Systems and Library Support We have removed support for operating systems that do not support the IPv6 API and POSIX threads in the system library.\nBIND may no longer run on some older versions of popular operating systems, and on operating systems that are no longer maintained. We don\u0026rsquo;t have the ability to test on HP-UX, AIX, older Solaris, or SPARC machines in-house any longer. We are also unwilling to maintain a lot of special code to adjust for modern OS features that are missing on those platforms. We are willing to consider contributed patches to help keep these systems working, if they are still supported by their respective vendors.\nObsolete cryptography We are removing support for algorithms no longer considered secure. We have already removed support for the ECC-GOST algorithm, and we will remove support for the DSA algorithm. We no longer support versions of OpenSSL prior to 1.0.0, and we strongly recommend using a supported OpenSSL version (1.0.2 or 1.1.0) or LibreSSL. OpenSSL (or a compatible library) is now required to compile BIND.\nObsolete compilers We are adopting C99 as our minimum coding standard, and have updated some functions that predated C99. We also decided to take advantage of widely available atomic operations support in C11 compilers and remove the older BIND custom code. (We are relying on a shim for atomic operations support on Windows, however.)\nInternationalization We have removed support for IDNA2003 fallbacks and we only support IDNA2008 now.
This affects BIND tools such as dig and not BIND itself.\nGoing forward, to compile and run BIND successfully, you will need: A C99-compatible compiler with atomic operations support (C11 stdatomic, or __atomic, or __sync builtins with GCC or Clang compilers, and Interlocked functions with MSVC). A cryptography library (either OpenSSL, LibreSSL, or an HSM with a PKCS#11 interface for public-key cryptography). A POSIX-compliant system with support for threads. Support for IPv6 in the operating system, even if you aren\u0026rsquo;t using IPv6 currently. (We previously had extra checks to determine whether IPv6 was available in the system; those checks have been removed, and we need to protect against the case where BIND features that require IPv6 are enabled but the operating system doesn\u0026rsquo;t support it.) Feature Changes In addition to this modernization and cleanup, we have made a few small feature changes.\nWe now spread the initial RRSIG expiration times more evenly over the entire working sig-validity-interval when signing a zone in named, to smooth out re-signing and transfer loads. This will help signature maintenance for very large signed zones. We have added a new validate-except option, which specifies a list of domains beneath which DNSSEC validation should not be performed. This is effectively a long-term Negative Trust Anchor (NTA). ","permalink":"/blogs/bind-9-new-versions/","tags":["BIND"],"title":"BIND 9.13.3"},{"categories":["BIND","DNS"],"contents":"It is often asked, \u0026ldquo;Why can\u0026rsquo;t I have a CNAME at the zone apex?\u0026rdquo; This article explains why you can\u0026rsquo;t do that, and then continues with a discussion of potential alternatives.\nThe use case for wanting a CNAME at a zone apex is typically one where an organization would like to have their web content hosted and provisioned by a specialist third party.
Some third parties, rather than giving you an IP address for the hosting provisioning (in which case, they would most likely be using anycast behind the scenes), will instead give you a new target name to which they ask you to alias your web server name in your DNS zone.\nWhat is the zone apex? The zone apex is where the SOA and NS (and often MX) records for a DNS zone are placed. They are DNS records whose names are the same as the zone itself. For example, in zone mydomainname.org you might have:\nmydomainname.org. 3600 IN SOA dns1.mydomainname.org. hostmaster.mydomainname.org. 2018073001 7200 3600 24796800 3600\nmydomainname.org. 3600 IN NS dns1.mydomainname.org.\nmydomainname.org. 3600 IN NS dns2.mydomainname.org.\nmydomainname.org. 3600 IN MX 10 mail.mydomainname.org.\nThe DNS record type CNAME (Canonical Name) is used for rewriting one name in a zone to another, different name (which could be in the same zone, or somewhere else). It\u0026rsquo;s described in RFC 1034 thus:\nThe domain system provides such a feature using the canonical name (CNAME) RR. A CNAME RR identifies its owner name as an alias, and specifies the corresponding canonical name in the RDATA section of the RR. If a CNAME RR is present at a node, no other data should be present; this ensures that the data for a canonical name and its aliases cannot be different. This rule also insures that a cached CNAME can be used without checking with an authoritative server for other RR types.\nFor example: www.mydomainname.org.
3600 IN CNAME www.mydomainname.hostingcompany.com.\nIf a recursive resolver receives a client query for www.mydomainname.org, it will look up www.mydomainname.org, receive the CNAME response, and understand that it now has to look up www.mydomainname.hostingcompany.com too, and return the answer to that query to the client.\nThe resolver then adds both answers (for www.mydomainname.org and for www.mydomainname.hostingcompany.com) to its cache, in case another client comes to ask the same question.\nMany people (mistakenly) believe that the semantics of a CNAME are that the target domain in the RDATA is an alias for the left-hand side (the owner name). In fact, this is the wrong way round – a CNAME says that the owner name is an alias for the target. The original concept was to permit the addition of \u0026ldquo;also known as\u0026rdquo; names to a zone in DNS, which would then be redirected to the \u0026ldquo;official\u0026rdquo; (or \u0026ldquo;canonical\u0026rdquo;) name.\nOperationally, a CNAME record effectively rewrites all other DNS records for that owner name (on the left-hand side) – for all record types – to the target name (the RDATA on the right-hand side).\nISC BIND 9 will therefore not allow you to add a CNAME at the zone apex, because this would create a broken zone and cause DNS resolution failures. The interpretation of the CNAME record (per the DNS protocol) makes it nonsensical to have a CNAME coexisting with any other records for that same name.\nNevertheless, it has always been OK to use CNAME for redirection to a web hosting provider where the website is addressed as www.mydomainname.org, but CNAME cannot be used where the website is addressed simply as mydomainname.org, because the zone\u0026rsquo;s NS and SOA (and MX if being used) records will also need to exist at that same name.\nWhy can\u0026rsquo;t the rules about having a CNAME at the zone apex (or coexisting with other records) be relaxed and reinterpreted?
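To make the exclusivity rule concrete before answering that question, here is a toy model of it in Python. This is a sketch for illustration only, reusing the example names from this article; real resolvers are far more involved.

```python
# Toy model of the CNAME exclusivity rule described above.
# Illustration only: real resolvers are far more involved.

ZONES = {
    "mydomainname.org.": {
        "SOA": "dns1.mydomainname.org. hostmaster.mydomainname.org. ...",
        "NS": ["dns1.mydomainname.org.", "dns2.mydomainname.org."],
    },
    "www.mydomainname.org.": {
        "CNAME": "www.mydomainname.hostingcompany.com.",
    },
    "www.mydomainname.hostingcompany.com.": {
        "A": "192.0.2.10",  # documentation address (RFC 5737)
    },
}

def resolve(name, rtype, depth=0):
    """Follow CNAME redirections the way RFC 1034 requires."""
    if depth > 8:
        raise RuntimeError("CNAME chain too long")
    rrset = ZONES.get(name, {})
    if rtype != "CNAME" and "CNAME" in rrset:
        # A CNAME answers for *every* record type at its owner name.
        return resolve(rrset["CNAME"], rtype, depth + 1)
    return rrset.get(rtype)

print(resolve("www.mydomainname.org.", "A"))  # 192.0.2.10 - the working case

# If the apex itself carried a CNAME, the zone's own NS (and SOA, MX)
# lookups would be redirected to the hosting company and lost:
ZONES["mydomainname.org."]["CNAME"] = "www.mydomainname.hostingcompany.com."
print(resolve("mydomainname.org.", "NS"))  # None - the zone breaks
```

This is the breakage that the protocol rule prevents, and why BIND 9 refuses to load a zone with a CNAME alongside the apex SOA and NS records.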
When \u0026ldquo;www\u0026rdquo; served as a \u0026ldquo;service identifier\u0026rdquo; prefix for the host that actually serves the content, using CNAME for redirection to a hosting provider was OK. It worked as required, even though the semantics weren\u0026rsquo;t really correct. (Mail didn\u0026rsquo;t need a prefix because it always had a special record type (i.e. MX) that served as the service identifier.)\nIn more recent times, along with an increase in web server provisioning, it has also become popular for organizations to expect to have their website accessible simply by their apex DNS domain name – without the \u0026ldquo;www\u0026rdquo; prefix.\nWe cannot change how the special CNAME record is used without changing all of the DNS server implementations in the world at the same time. This is because its meaning and interpretation were strictly defined in the DNS protocol; all current DNS client and server implementations adhere to this specification. Attempting to \u0026ldquo;relax\u0026rdquo; how CNAME is used in authoritative servers without simultaneously changing all DNS resolvers currently in operation would cause name resolution to break (and web and email services to become intermittently unavailable for those organizations implementing \u0026ldquo;relaxed\u0026rdquo; authoritative server solutions).\nAlternatives Having now understood why it is not possible to add a CNAME at the zone apex to \u0026ldquo;make this work,\u0026rdquo; what other alternatives are there?
Some suggestions follow:\nWebserver redirection – set up a webserver for mydomainname.org that provides a web redirect to www.mydomainname.org\nPros: This is a very simple solution, and one which leaves the final provisioning in the hands of the web hosting provider.\nCons: This solution requires that a webserver be running for the sole purpose of providing redirection; it will also visibly rewrite the website name in the URL of the client web browser, pre-pending the \u0026ldquo;www\u0026rdquo;.\nAdd A and AAAA records to your zone that are the addresses that the target of the CNAME suggested by the web hosting provider will ultimately resolve to.\nPros: This is a fairly simple workaround to implement.\nCons: If the provider\u0026rsquo;s hosting solution is dynamic, then those addresses may change, possibly quite frequently. You will have to monitor and regularly update those records (although this is something that could be scripted).\nFor a simple zone that provides nothing to visitors other than the website, consider asking your parent zone\u0026rsquo;s administrators to add the CNAME for your names to their zone directly instead of delegating it to you to manage.\nPros: This is a very simple workaround to implement.\nCons: This will not work for an organization that has many more names in its DNS zone. It will also not work if the parent zone\u0026rsquo;s administrators don\u0026rsquo;t accept non-delegated records (in other words, hosting your zone records directly instead of delegating them to you). You also cannot implement this as a solution if the domain is also used for mail.\nALIAS or ANAME DNS (non-standard DNS feature). Some DNS hosting providers and DNS server software providers offer \u0026ldquo;special\u0026rdquo; record types such as ALIAS or ANAME for a hosted zone. These records are never returned directly to clients. 
Instead, when a client queries for a name where an ALIAS or ANAME record would be applicable, it follows the redirection and synthesises the A or AAAA record for the client before responding.\nPros: This is a simple solution to the problem for administrators.\nCons: This solution is only available if your DNS hosting provider or DNS server software supports it. In addition, the need to perform DNS recursion (prior to synthesizing the query response for the client) increases the query response time, as well as adds the potential for authoritative server problems to be introduced if the necessary recursive resolution encounters delays or problems. Authoritative DNS servers, particularly those that are public-facing, do not normally perform any recursive functions. Some DNS hosting providers only permit ALIAS/ANAME records where the redirection target name is also hosted by themselves.\nFor more information on the ANAME IETF Internet Draft, you can read:\nhttps://tools.ietf.org/html/draft-ietf-dnsop-aname-01\nThere has been extensive discussion about this draft and about other ways to resolve web hosting DNS redirection for the apex on the IETF DNSOP WG mailing list: https://www.ietf.org/mailman/listinfo/dnsop\nUse the services of a web-hosting provider that allows you to configure IP addresses directly rather than using name redirection.\nPros: This is a very simple solution that scales well if your web hosting provider is using anycast provisioning.\nCons: If your chosen web hosting provider does not offer this option, it will not be available to you without changing to another web hosting provider.\nAnother potential solution currently being discussed would add a new DNS resource record type that browsers would look up, that could exist at the apex. 
This would be an application-specific hostname for HTTP requests (similar to the way MX works).\nPros: This is completely consistent with the DNS design.\nCons: This is not available yet, and would require a browser client update. During a transition to this solution, you would need this to work alongside A and AAAA records, so recursive resolvers would need to send this new record as additional data.\nNote: another failing of any solution in which the authoritative server synthesises, or actively monitors and maintains locally, the resolution of the \u0026ldquo;alias\u0026rdquo; server address (options 2 and 4), is that any location-dependent A/AAAA records will be correct per the authoritative server\u0026rsquo;s location, not the end-user\u0026rsquo;s recursive server (resolver) location.\nWARNING: \u0026ldquo;Clever\u0026rdquo; provisioning servers that respond with query-dependent CNAME or MX/NS/SOA records will cause broken DNS resolution.\nThere are some \u0026ldquo;engineered\u0026rdquo; solutions available where the servers authoritative for a zone have been made capable of responding with either a CNAME (when queried for A/AAAA and other record types) or the actual contents of the zone at the apex (if queried for MX, NS, or SOA records). This may appear to be a good operational workaround, but it is not.\nIt is not a good solution because resolvers (following the DNS protocol correctly) will cache what they have learned.\nIf a client\u0026rsquo;s local DNS resolver does not have anything in cache yet and the client queries for www.mydomainname.org MX, the \u0026ldquo;engineered\u0026rdquo; server will provide the expected MX record to the resolver (which will then cache it and return it to the client).
So far so good – this time DNS resolution was successful.\nBut, if earlier, another client had asked for www.mydomainname.org A, the \u0026ldquo;engineered\u0026rdquo; server would have provided the CNAME record, and if this was already in cache, then instead of looking up the MX record for the client at www.mydomainname.org, the resolver would instead have followed the cached CNAME redirection and tried to look up www.mydomainname.hostingcompany.com MX.\nThis is most likely to fail and will result in mail delivery and other intermittent problems.\n","permalink":"/blogs/cname-at-the-apex-of-a-zone/","tags":["BIND","CNAME"],"title":"CNAME at the apex of a zone"},{"categories":["BIND","Events"],"contents":"ISC\u0026rsquo;s training partner, Men \u0026amp; Mice, has announced public courses for September through November, in Amsterdam, Geneva, Denver, the San Francisco area, and Zurich.\nISC Support Customers are eligible for discounted or free training.\nThe courses, originally jointly developed by Men \u0026amp; Mice and ISC, are hands-on and have been highly reviewed by past students. In the first half of 2018, Men \u0026amp; Mice taught courses in five countries with students flying in from across time zones and from other continents. Participants came from ccTLDs, national governments, major enterprises, and smaller institutions.\nTo see the full course schedule and to register, visit the Men and Mice website.\n","permalink":"/blogs/dns-bind-training/","tags":["BIND","Training"],"title":"DNS \u0026 BIND Training"},{"categories":["Kea","ISC"],"contents":"The Kea team is participating in the Google Summer of Code (GSOC) for the first time. The GSOC program pays University students to spend their summer working on an open source project. This year, according to Google, the GSOC program is sponsoring 1,264 students from 62 countries working with 206 open source organizations. 
Students are encouraged to take the lead in ensuring their own success, starting by writing their own project proposals, with goals, a proposed design and task lists.\nWe have two student interns for summer 2018 working on Kea-related open source projects, Sunil K. S. Mayya and Jerin John. The program kicked off with a month-long \u0026ldquo;community bonding\u0026rdquo; period, in which the students got more familiar with the project, joined the mailing lists, and planned their summer work. Starting in mid-May, they began coding, and they are now slightly more than one-third of the way through their projects.\nSunil K. S. Mayya, working on IPv6 Reconfigure Sunil (who goes by Mayya) is working towards his Masters in Electrical and Computer Engineering at the Technical University in Kaiserslautern, Germany. Mayya has quite a bit of experience already, both with programming and with networking in particular, since he worked for Nokia updating networking applications for IPv6 after receiving his Bachelor’s degree in Telecommunication.\nAccording to his application, Mayya is interested in working on DHCPv6 because “it is cool and contributing to the internet society is cooler :).” Mayya proposed to add support for IPv6 Reconfigure as specified in RFC 3315-bis to the Kea DHCP server.\nFrom Mayya’s GSOC proposal: \u0026ldquo;IPv6 reconfiguration through DHCP is a server triggered mechanism for the clients to update their IPv6 addresses and prefixes. The idea is to make the clients renew their configuration by making them send an Information request/Renew message/Rebind message to the server when the server configuration changes.\u0026rdquo; This is a powerful and useful feature, but also complex to implement. Among other things, the reconfigure request requires authentication.\nMayya aspires to become a regular contributor to open source projects and to work on IETF networking standards. 
He has already thought of several other enhancements he would like to contribute to Kea after he completes his GSOC project. In the fall, Mayya will be working on his Master’s Thesis. He is looking for a research topic in the area of networking. He is also interested in opportunities to present his work on Kea at an open source, IPv6, or networking conference in the fall of 2018.\nKea Reconfiguration project repository on Github (added December, 2018)\nAlso see Sunil Mayya\u0026rsquo;s presentation at the November, 2018 DENOG on DHCP reconfiguration.\nJerin John, developing a graphical dashboard for Kea Jerin John has just completed his Bachelor’s of Technology in Computer Engineering from K. J. Somaiya College of Engineering, University of Mumbai. Jerin’s senior project was a neural learning question and answer system (https://github.com/meshde/EruditeX).\nIn his proposal, Jerin said, “I am truly fascinated by the concept of open source development as it provides new entrants like me the opportunity to contribute to real software applications, where you can see the code you write become part of software that serve a plethora of people. Open source tools have been part of my life from the beginning, be it operating systems like Linux and Android or my favorite programming language Python.\n\u0026ldquo;GSoC would provide me the ideal platform to learn tons of stuff, gain exposure to the software industry and an opportunity to give back to the open source community. Internet Systems Consortium (ISC) was my choice of organization primarily due to its work in supporting the internet infrastructure and also the innovative ideas behind Kea.” After completing his GSOC project, Jerin plans to begin a full-time job as a software developer at Microsoft India.\nJerin is working on adapting the GLASS dashboard, an open source project developed to display status and statistics for ISC DHCP, to work with Kea. 
Jerin chose to work with GLASS because it was developed by an operator and therefore was a direct expression of real operational requirements for a DHCP dashboard. Also, GLASS is a very young, recently developed project that uses modern methods and libraries. However, GLASS runs on the ISC DHCP server, and Jerin hopes to enable his project, Anterius, to run on a separate machine from the Kea server. Some of the data that GLASS collects from ISC DHCP is not possible to fetch remotely from Kea, so this will be a challenge. GLASS relies on a separate open source tool for monitoring ISC DHCP pool utilization; Jerin had to reproduce that functionality for Kea. Finally, GLASS supports DHCPv4 only, and Jerin is hoping to make his tool support both DHCPv4 and DHCPv6. Because this is a separate software project from the main Kea project, Jerin is also having to create a suite of automated tests for the project. Jerin is working on Github at https://github.com/isc-projects/kea-anterius.\nJerin is hoping to demonstrate his Kea dashboard to Kea-users in a webinar in mid-summer, get some operator requirements input, and perhaps interest some of them in using and contributing further to the project. He would love to have users drop in on his Github repo and leave their comments and requirements.\nKea Anterius (dashboard) project repository on Github\nAt ISC we have debated in past years whether to apply for GSOC, because it could be a challenge for our small team to support student interns, and our software projects are big and complicated. It is hard for any external contributors to make successful contributions, let alone beginners. Working on networking software is inherently complicated; both students have had to set up their own test environments, with simulated networks, a DHCP server and DHCP clients, and/or traffic generators.\nSo far, we have all made it through the first month of coding, and our first evaluation with Google, and both projects are proceeding very well. 
Supporting these projects does mean some additional work for the core team – we have uncovered additional changes to Kea that the GSOC projects depend on that the students can\u0026rsquo;t implement themselves, and we will have to code review Mayya\u0026rsquo;s submissions to the core Kea project. We have been very lucky to get two very capable, hard-working students as our GSOC interns and they have been able to accomplish a lot with very little help. Both students started by writing requirements documents. Now they both email weekly status and issues reports, which we follow with a weekly video conference call. Jerin will need to get additional feedback from the Kea user community once his dashboard is ready for field testing. This seems like a very worthwhile investment; we are confident that both of these projects will result in software that Kea users will benefit from.\nThe ultimate purpose of the GSOC program is to help train, and welcome, new software developers to engage in open source projects. We are excited to have found two very promising new open source developers through GSOC.\nReferences Google Summer of Code program site\nISC on GSOC\nKea Reconfiguration project repository on Github\nKea Anterius (dashboard) project repository on Github\n","permalink":"/blogs/kea-google-summer-of-code-projects/","tags":["Kea","GSOC","ISC"],"title":"Kea Google Summer of Code Projects"},{"categories":["Kea","ISC","DHCP"],"contents":" Kea 1.4 is ready for download and use. This is a big release, with several significant new features. For users who have adopted Kea because they love the database backend, for lease and host reservations storage, we have added a new database and improved our statistics:\nCassandra Kea has had experimental support for an Apache Cassandra database backend for a while, but the feature hadn\u0026rsquo;t been finished or fully tested. 
This has changed: the code now supports host reservations and has a great number of new smaller fixes and improvements. It is now both easier to install and much better documented. Thank you to Deutsche Telekom AG for sponsoring this work.\nDatabase statistics One of the frequently requested features was the ability to report accurate lease statistics when leases are stored in a database. This surprisingly tricky problem was solved for MySQL and PostgreSQL by a new stat_cmds hook library and schema updates. Users also requested the ability to reconnect after the database connection is lost for whatever reason, and this has been implemented. NOTE: you will need to upgrade any existing MySQL and PostgreSQL Kea databases to the new schema versions.\nClient Classification For users who need complex client classification options, it is now possible to specify client classes at the pool level, so you can control who is able to use specific pools, group similar clients together, or even reject clients that don\u0026rsquo;t meet certain class requirements. Class expressions have expanded capabilities. The most popular seems to be the member operator, which determines whether a packet is a member of a given class. Complex boolean logic is available. Ever wanted to do member(foo) and not member(bar)? Now you can.\nNB: Client Classification changes coming\nBoth Kea and ISC DHCP assign options to clients based on a fixed option precedence order. However, the configuration categories are not evaluated in the same order in both servers. The precedence order is:\nKea: Host, Pool, Subnet, Shared network, Class, Global\nISC DHCP: Host, Class, Pool, Subnet, Shared network, Global\nIn a future version of Kea, probably in Kea 1.5, we would like to adjust the option precedence order for Kea so that it matches the order for ISC DHCP (to simplify configuration for users migrating to Kea). Because this is a potentially disruptive change to existing Kea users, we want to give advance notice that we are planning this.
To express your feedback about this pending change, please send your comments to kea-users@lists.isc.org.\nFor users who may have been waiting for some feature equivalent for DHCPv4 failover in Kea, we have added:\nHigh Availability To provide a highly available service, despite server failure, two Kea instances can now be configured to run as a pair.\nTwo modes are supported.\nIn load balancing mode both partners are active and are handling approximately half of the traffic. In case of a failure of either server, the partner is able to take over responding to all traffic directed to both servers.\nIn hot standby mode there is a primary instance handling all traffic and sending updates to its secondary partner. The secondary monitors the health of the primary and is able to take over automatically in case the primary fails.\nSupport for additional backup servers is implemented. The backup server\u0026rsquo;s database is updated as soon as possible after changes are made to the primary server\u0026rsquo;s database, so that it can be used as an almost drop-in replacement in case of catastrophic failures that take out both primary and secondary servers.\nThe solution supports both IPv4 and IPv6 and can work with any backend, including memfile. Note that this is NOT an implementation of the IETF standard DHCPv4 failover (which does not support DHCPv6).\nThis ISC KB article compares the Kea 1.4 HA feature with ISC DHCP\u0026rsquo;s DHCPv4 failover.\n-\u0026gt; High Availability was planned to be a premium feature, but during the beta, we decided instead to release this in the open source, to help more ISC DHCP users migrate to Kea!\nWe have also added a new premium hook library:\nRADIUS Integration Kea can now be integrated with a RADIUS server. Both access and accounting roles are supported. Kea is able to send Access-Request messages and alter its behavior depending on the responses. 
Specific IP addresses may be assigned (if Framed-IP-Address or Framed-IPv6-Address is received), clients can be assigned to a specific pool (if Framed-Pool or Framed-IPv6-Pool is received) or denied service altogether (if Access-Reject is received). Kea can also send accounting messages to RADIUS accounting servers. As with other features, this supports both IPv4 and IPv6.\nTime to update! We are actively developing Kea, so we continue to commit to supporting two versions at a time only. With the release of Kea 1.4, we plan to end maintenance of Kea 1.2 (at the end of June 2018). We also have slightly changed the instructions for installing hook libraries, which unfortunately means that your older Kea hook libraries will need to be updated to install with Kea 1.4. Anyone who purchased the Kea 1.3 Premium Hook package will be getting an email with instructions on how to get a free upgrade to the Kea 1.4 Premium package.\nWe have published, and plan to maintain going forward, a Kea Significant Features Matrix showing when we introduced each of the major Kea features.\nDownload Kea\n","permalink":"/blogs/kea-1-4-adds-high-availability-mode/","tags":["Kea","releases"],"title":"Kea 1.4 Adds High-Availability Mode"},{"categories":["BIND","DNS"],"contents":"The IETF standards community regards the possibility of pervasive monitoring as an attack on the Internet and there is strong consensus among IETF participants that we must protect Internet users\u0026rsquo; privacy.\nWe were wondering whether the people who are operating DNS services were feeling a similar sense of urgency, and if they had any significant concerns about obstacles to deployment.\nFrom March 25 – May 4, 2018, we ran a survey, advertised via social media and ISC\u0026rsquo;s Software Downloads page, asking whether people running DNS systems were interested in deploying DNS privacy, and what concerns they had.\nHere are the results of that survey.\nWho Responded?
We advertised on social media, but then worried that most of those respondents might be specifically interested in privacy, so we added a survey invite on ISC\u0026rsquo;s Software Downloads page to get participation from people who might not follow ISC on social media. There was no \u0026ldquo;prize\u0026rdquo; offered for completing the survey and no recognition, so the only likely motivation was wanting to help us out.\nWe got 195 responses in total.\n126 were from various social media (we posted on Twitter, ISC\u0026rsquo;s LinkedIn page and a couple of LinkedIn groups, and ISC\u0026rsquo;s Facebook page). We got 64 responses from people who clicked the link on ISC\u0026rsquo;s Downloads page. Another 5 people responded to a solicitation sent to the RIPE DNS working group mailing list. With an open survey like this, there is no real way to ensure that the respondents are representative of the overall operator population. We asked only the minimum of demographic questions to keep the survey very short, to maximize completion of the survey. We did not collect any personally-identifying information. (It\u0026rsquo;s a privacy survey!)\nWhat is your primary involvement with the Internet Infrastructure? (pick one answer)\nIndividual consumer, Internet user – 23%\nInternet Service Provider (access + services) – 18%\nEducational organization – 12%\nIn the business of creating products that leverage the Internet – 10%\nInternet-enabled business – 9%\nEnterprise (not primarily dependent on the Internet) – 8%\nHosted (cloud) services provider – 5%\nGovernment office – 3%\nOther* – 11%\nWe asked those who selected \u0026ldquo;Other\u0026rdquo; what their role was, and the responses mostly indicated an individual contributor, rather than service operator role.
Responses included: consultant, hobbyist, small business, Internet engineer, registrar.\n50 Countries represented Although the largest number of responses came from the United States, 50 countries were represented, including countries in South America, the Middle East, the Caribbean, and Africa. Participation was relatively weaker across Asia, outside of China.\nFindings 70% said that end-user privacy concerns are a very or extremely important factor in decisions about what products or services are offered, and how those services work. Over half of all respondents said privacy concerns had ALREADY impacted the products and services they used in their organization, and between 30 – 40% cited various restrictions on data use because of privacy concerns. Many respondents commented that they already have restrictions imposed by HIPAA (Health Insurance Portability and Accountability Act) or PCI (Payment Card \u0026hellip; something), implying that this is just another compliance requirement. 50% see a very or extremely useful marketing benefit for their company, if they can make end-user privacy claims about their products or services. 11% of respondents have already deployed QNAME minimization. 34% said they would like to or plan to implement QNAME minimization. 19% said they did not want to implement QNAME minimization and the rest were not sure. When we eliminated the responses from Individuals and the \u0026ldquo;Other roles\u0026rdquo; category, QNAME minimization was even more popular, with 9% already implemented and 43% planning to implement. When asked whether QNAME minimization is required under GDPR, 29% thought it might be, and an exactly equal number of people thought it wasn\u0026rsquo;t. The rest weren\u0026rsquo;t sure. 50% of all respondents are very or extremely interested in offering encrypted DNS services. Respondents rated various suggested \u0026ldquo;obstacles\u0026rdquo; to deploying encrypted DNS services. 
The most often cited significant obstacles were (1) availability of the features in the products and services they use and (2) lack of time and resources to develop and deploy the service. The DNS developer community can add the features, of course, but we have to be aware that for operators, one of their top concerns is not having the time to deploy them. Despite the obstacles to deploying a full DNS privacy service, 70% of respondents would not recommend or select a public hosted DNS privacy service like 1.1.1.1 or 9.9.9.9 for their users. We can speculate as to why that might be, but we did not ask why in this survey. The respondents who appear to be individual contributors (Individual and Other roles) were more accepting of the hosted DNS privacy services than those who apparently operate services for others. For a read-only view of the charts on Survey Monkey, click here.\nConclusions If we develop QNAME minimization in BIND, we can expect that approximately half our users are open to deploying it. That seems like a good level of commitment for a feature that isn\u0026rsquo;t even developed yet. (QNAME minimization is currently in development in BIND. Unbound has already released support for this feature.) Since some respondents are already using QNAME minimization, we can infer that not all respondents are BIND users. As one respondent pointed out to us, DNSdist has QNAME minimization but the PowerDNS Recursor does not.\nInterest in QNAME minimization was somewhat lower among individual contributors than service operators. The reason for this is not obvious from this survey.\nFor privacy advocates, the fact that 50% see a marketing benefit in touting privacy protections is an opportunity. Perhaps we need a \u0026ldquo;DNS Privacy Compliant\u0026rdquo; sticker for services?\nWe might have gotten more useful insights if we had qualified respondents more, selecting only those running recursive services, and asking how many end users their services supported. 
Since DNS Privacy really only applies to recursive services, the authoritative operators and registrars who answered the survey may have been confused about how to answer and may have obscured the trends among resolver operators.\nThere was no consensus about what GDPR might mean for DNS operators.\nTo find out more about what DNS Privacy means, we recommend the website for the DNS Privacy Project.\nI am willing to share the whole data set with anyone – there is no personally-identifying information in it. If you would like the data dump, please email me at vicky at isc dot org.\n","permalink":"/blogs/do-dns-operators-want-to-deploy-dns-privacy/","tags":["DNS","privacy"],"title":"Do DNS Operators Want to Deploy DNS Privacy?"},{"categories":["Kea","DHCP"],"contents":"We are excited to share our latest Kea release, Kea 1.4. This is the beta test version, posted so users and prospective users can do some testing and give us feedback. We anticipate posting a final version on June 15th, so please get us your feedback in time to make any needed changes.\nNew Features in 1.4 Database improvements Many Kea users report using multiple Kea instances sharing a single database backend, or cluster of databases. One of the frequently requested features was the ability to report accurate statistics in this case. This surprisingly tricky problem was solved for MySQL and PostgreSQL by a new stat_cmds hook library and schema updates. Users also requested the ability to automatically reconnect after the database connection is lost for whatever reason.\nKea has had experimental support for an Apache Cassandra database backend for a while, but the feature hadn\u0026rsquo;t been finished or fully tested. This has changed; the code now supports host reservations and has a great number of new smaller fixes and improvements. It is now both easier to install and much better documented. Thank you to Deutsche Telekom AG for sponsoring this work. 
We have done some initial performance testing of the Cassandra backend, but this is not a core expertise of ours. If any beta testers have experience with tuning Cassandra for optimal performance, we would welcome this input.\nClassification It is now possible to specify client classes on a pool level, so you can control who is able to use specific pools, group similar clients together, or even reject clients that don\u0026rsquo;t meet certain class requirements. Class expressions have expanded capabilities. The most popular addition seems to be the member operator, which determines whether a packet is a member of a given class. Complex boolean logic is available. Ever wanted to do member(foo) and not member(bar)? Now you can.\nWe hope this meets the needs ISC DHCP users have for permit/deny-type statements. We would love to hear from beta testers whether it does – or doesn\u0026rsquo;t – cover all your use cases.\nBug fixes and quality-of-life improvements With 134 tickets closed, 1.4.0 beta is by far the biggest release we\u0026rsquo;ve ever done. We hope it will improve your quality of life!\nWe have also added two new features which depend on premium hook libraries. Premium hook libraries are a great way to provide financial support for the project, and get some additional functionality in return.\nRADIUS Kea can now be integrated with a RADIUS server, such as FreeRADIUS. Both access and accounting roles are supported. Kea is able to send Access-Request messages and alter its behavior depending on the responses. Specific IP addresses may be assigned (if Framed-IP-Address or Framed-IPv6-Address is received), and clients can be assigned to a specific pool (if Framed-Pool or Framed-IPv6-Pool is received) or denied service altogether (if Access-Reject is received). Kea can also send accounting messages to RADIUS accounting servers. 
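For readers who want to try the RADIUS integration, a configuration fragment along these lines enables the hook. This is only a sketch: the library path, server address, and secret are placeholders, and the exact parameter names should be checked against the Kea Administrator Reference Manual before use.

```json
{
  "Dhcp4": {
    "hooks-libraries": [
      {
        "library": "/usr/local/lib/kea/hooks/libdhcp_radius.so",
        "parameters": {
          "access": {
            "servers": [ { "name": "192.0.2.5", "port": 1812, "secret": "change-me" } ]
          },
          "accounting": {
            "servers": [ { "name": "192.0.2.5", "port": 1813, "secret": "change-me" } ]
          }
        }
      }
    ]
  }
}
```

With the access role configured, Kea consults the RADIUS server before assigning a lease; the accounting role reports lease events back to it.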
As with other features, this supports both IPv4 and IPv6.\nSome of the database backends may be slow to respond, compared to the expected response times for a DHCP request. To minimize the impact and improve performance, a new Host Cache library provides a way to cache responses from the host backends. This includes negative caching, i.e. the ability to remember that there is no client information in the database. We have included some simple cache management commands.\n(The Host Cache library has been tested only with the RADIUS backend, which is particularly slow because RADIUS was originally designed for dial-up connection control. We would love to hear from beta testers whether it is useful for the other database backends. We would also love to get feedback on how the host cache feature works in your network and whether you would like to see more cache management controls.)\nHigh Availability Many users have been accomplishing a kind of high-availability deployment by using a shared database backend. We wanted to make it possible to have high availability, even without using a database backend.\nTwo Kea instances can now be configured to run as a pair. Two modes are supported. In hot-standby mode there is a primary instance handling all traffic and sending updates to its secondary partner. The secondary monitors the health of the primary and is able to take over automatically in case the primary fails. In load-balancing mode both partners are active and each handles approximately half of the traffic. In case of a failure of either server, the partner is able to take over, responding to all traffic directed to both servers. Support for additional backup servers is implemented. 
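To give a flavor of the new feature, a hot-standby pair might be configured with the HA hook roughly as follows. Treat this as a sketch: the library path, server names, and URLs are invented, and the parameter names should be verified against the Kea 1.4 documentation.

```json
{
  "Dhcp4": {
    "hooks-libraries": [
      {
        "library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
        "parameters": {
          "high-availability": [
            {
              "this-server-name": "server1",
              "mode": "hot-standby",
              "peers": [
                { "name": "server1", "url": "http://192.0.2.1:8080/", "role": "primary" },
                { "name": "server2", "url": "http://192.0.2.2:8080/", "role": "standby" }
              ]
            }
          ]
        }
      }
    ]
  }
}
```

The second server gets the mirror-image configuration with its own "this-server-name"; each peer learns about the other from the shared "peers" list.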
The backup server\u0026rsquo;s database is updated as soon as possible after changes are made to the primary server\u0026rsquo;s database, so that it can be used as an almost drop-in replacement in case of catastrophic failures that take out both primary and secondary servers.\nThis solution supports both IPv4 and IPv6 and can work with any backend, including the default memfile. Note that this is NOT an implementation of the IETF standard DHCPv4 failover (which does not support DHCPv6). It is much simpler than the failover feature of ISC DHCP, which permits, for example, load-balancing machines to split the load 30/70 or 60/40 or any other arbitrary way.\nWe have tested the crap out of this feature, so we sure hope you don\u0026rsquo;t find any issues with it. 😉\nTesting premium hooks ISC Kea support customers will receive tickets inviting them to beta test the premium hooks, which are included with the support subscription. If you are interested in testing premium hooks and do not have a Kea support contract, please contact info at isc dot org to apply to beta test. We will give you a 100% discount coupon in exchange for your help in improving the software. Please let us know when you email which premium hook package(s) you want to test.\nLicense Kea 1.4.0-beta is released under the Mozilla Public License, version 2.0.\nThe premium hook libraries are provided in source code form, under the terms of an End User License Agreement (you are not permitted to redistribute).\nDownload the beta and start testing! The Kea 1.4.0-beta source may be downloaded from this website or from our FTP site.\nOther stuff about Kea ISC provides detailed documentation, including installation instructions and usage tutorials in the Kea Administrator Reference Manual. Documentation is included with the installation or via https://kea.isc.org/docs in HTML, plain text, or PDF formats. 
ISC maintains a public open source code tree on Github and wiki pages with roadmap and issue tracking at kea.isc.org.\nLimitations and known issues with this release can be found at https://gitlab.isc.org/isc-projects/kea/-/wikis/known-issues-list.\nWe\u0026rsquo;d like users of this software to please let us know how it worked for you and what operating system you tested on. Feel free to share your configuration or use case. Also we would like to hear whether the documentation is adequate and accurate (please open tickets for documentation omissions and errors). We want to hear from you even if everything worked.\nSupport Professional support for Kea is available from ISC. We encourage all professional users to consider this option; Kea maintenance is funded with support subscriptions. For more information on ISC\u0026rsquo;s software support, please visit our Support page. Free best-effort support is provided by our user community via a mailing list. Information on all public email lists is available at https://www.isc.org/mailinglists.\nIf you have any comments or questions about working with Kea, please share them on the kea-users list.\nBugs and feature requests may be submitted via our GitLab repository at https://gitlab.isc.org/isc-projects/kea/-/issues.\n","permalink":"/blogs/kea-1-4-beta/","tags":["Kea","beta","release"],"title":"Kea 1.4 beta Adds Cassandra Backend"},{"categories":["ISC","Privacy"],"contents":"This text was updated on 15 November 2021. Please click here to read our full, up-to-date privacy policy.\nLike many responsible organizations with an online presence, we have committed to meet our new obligations under GDPR (General Data Protection Regulations – it\u0026rsquo;s an EU thing). We are a small business with approximately 35 employees and no business activities located in Europe. We are committed to transparency because we care about public trust in ISC; we also don\u0026rsquo;t want to spend a lot of your money on lawyers. 
We have created a formal written privacy policy based heavily on the one ISOC recently published, and have done our own inventory of personal and technical data we have in our systems.\nWhat follows is an inventory of the data we collect and use, with a link to the formal Statement of Privacy.\nISC\u0026rsquo;s Public Sites ISC operates a number of Sites for public collaboration and information sharing, including but not limited to: public mailing lists, fora, ISC\u0026rsquo;s main website, project wiki pages, source code repositories, software issue trackers, our Knowledgebase, our ftp.isc.org and downloads.isc.org downloads sites, social media sites, and others.\nWe don\u0026rsquo;t use paywalls or require registration to read information on any of our Sites or to download documents or software. We do require registration to submit data to these systems in order to control spam and abuse. We may use your Personal Data submitted during registration to contact you. In some cases, your registration and other personal information provided by you will be visible to other users.\nUsers of our public mailing lists, issue trackers, code repositories, Knowledgebase and other ISC Sites consent to publishing their personal data when they choose to participate actively in those public sites. ISC\u0026rsquo;s mailman service, GitLab, ftp.isc.org, downloads.isc.org, kea.isc.org, bugs.isc.org, www.isc.org, and ednscomp.isc.org are operated by ISC. ISC\u0026rsquo;s Github site, Document360 Knowledgebase, Cloudsmith repository, SendOwl store and BrightTalk webinar replay site are operated by those companies on our behalf, as are our social media accounts on Facebook, Twitter, LinkedIn, and YouTube.\nISC maintains several databases for tracking software issue reports by users. These databases enable the public to report and view problem reports about ISC open source projects. 
These include but are not limited to:\nISC-hosted GitLab instances for BIND 9, Kea DHCP, and ISC DHCP (at gitlab.isc.org) ISC-hosted instance of Request Tracker for BIND 9 at bugs.isc.org (no longer used but still available for public browsing) All of these systems may store submitter username, email address, and other contact data, depending on what the user shared (e.g. in the signature field in the email), as well as technical data about their software deployment, network, and issue, and possibly configuration files, log files, and core dumps. We must have this information in order to communicate with the submitter to get more information about the issue and/or verify a solution.\nWe don\u0026rsquo;t want anyone to refrain from submitting issues because of privacy concerns, however, so if you want to submit an issue and don\u0026rsquo;t want your submission to be publicly visible, contact us to request that we hide your personal information. Depending on the site, this may mean that the entire issue is non-public, which defeats the purpose of having a public issue tracker, so please don\u0026rsquo;t do this without a compelling reason.\nISC\u0026rsquo;s Contact Database ISC maintains a contact database for sales and marketing purposes. This database includes contact names, with organization, title, email, phone and fax contact information, contracts, purchase orders and their details, and a log of emails exchanged between the contact and ISC. When you send ISC an inquiry, we log your inquiry including your contact information and the text of your inquiry in our contact database, where we also track our response to you. Your information will be in this database if you have ever contacted ISC asking for help, or a quote for services, either via phone, email, or by visiting our table at a conference. 
If you have made a purchase from or a donation to ISC, or visited our booth at a conference or industry event, your name and contact information may also be in this database. We use this information to acknowledge your donation and we may use it to solicit future donations. We have also in the past purchased some lists for marketing purposes, which may have also been imported into this database. We use this contact database to stay in touch with users, to sell and to provide the services that support our work. We do occasionally send bulk email to contacts in this marketing database, but that is infrequent, and always includes a clearly visible link to unsubscribe from future mailings. Users who have unsubscribed will be marked as \u0026ldquo;opted out\u0026rdquo; in the database and will not be sent further bulk emails. This database is maintained for us by Salesforce. Per our agreement with Salesforce they are permitted to hold and process this data for ISC\u0026rsquo;s own business use only, and they may not leverage it for any other purpose. ISC does not share or sell the information with any other organization. Access is limited to a small subset of ISC employees who need the information for their business purposes. Webinars and Surveys To register to participate in certain ISC activities (e.g. webinars), you may be asked to register and provide personal information including employer, physical address, and/or email and phone contact information. This information is collected to facilitate followup in case you ask questions on the webinar that we can’t answer on the spot, and may be used in marketing our services to you or fulfilling special offers made to webinar participants. The information may be stored in the webinar conferencing application and/or our customer relationship management (CRM) database. ISC occasionally conducts surveys for marketing and product management purposes. 
Survey tools generally use cookies and log IP addresses to prevent duplicate responses, or to facilitate completing a partially completed survey at a later time. Depending on the survey collection method, the survey data may permit identification of the email address of the respondent. The information provided in response to the survey is stored along with any respondent-identifying information collected by the survey tool. Webinar registrations and survey data are processed by third parties. We currently use Zoom for conferencing services and SurveyMonkey for survey operations. Access to webinar attendee and survey data is limited to ISC employees. We don\u0026rsquo;t share attendee or survey data, except in anonymized or summary form. The exceptions to this are webinars or surveys that are explicitly jointly sponsored with another organization; in that case we may share the data related to the webinar or survey with our co-sponsor for their own use only. ISC\u0026rsquo;s Support Customers Subscribers to ISC’s software support services have to provide contact information for multiple (usually four) points of contact. This information is required in order for us to provide the support service. This information generally includes:\nFirst and last (family) name\nEmail address\nOrganization name (employer)\nPhone number (optional)\nWe use the email address to communicate with support subscribers, to alert them of support ticket updates, to provide notice of new releases, to provide transactional information (such as validation of email address), to send terms and instructions for the support service, and to process forgotten passwords. 
Very infrequently, we may also use the technical support contact email addresses to survey support users about our support services or product usage, or to provide roadmap updates.\nWe don’t have any regular support process that uses customer telephone numbers, but some subscribers of our support services are allowed to contact us via phone and we might conceivably use the telephone number in an unusual process, e.g. for verifying identity when updating contact information or for password recovery.\nSupport technical contact information is stored in our support ticketing system, which is operated by ISC in the US. (Some technical support contacts may also be included in our main (Zendesk) contact database.) This database also includes a log of customer support tickets opened and our responses to them, and may include core dumps, configuration files, and software logs.\nWe use this information to support customers, and to identify and troubleshoot issues in our software. Access is limited to current ISC employees. We retain this record even after an individual or organization terminates their support relationship with ISC, unless they specifically request we delete it, because it provides us with a valuable technical record.\nOur standard support services agreement includes a non-disclosure agreement (NDA). We don\u0026rsquo;t publicly identify customers or their representatives unless the customer does so first. When we log issues in our publicly visible issue trackers on behalf of support customers, we either make the issue private, or anonymize the support customer\u0026rsquo;s identity.\nDonors to ISC If you have made a donation to ISC, your name and contact information may also be in our contact database, discussed above. We use this information to acknowledge your donation and we may use it to solicit future donations. We normally acknowledge donations of $10 or more on our website. We are happy to refrain from this if you request anonymity. 
Credit Card Information In the event you choose to make a donation or purchase using your credit card, we will request credit card information, including number, expiration date, billing address, and card security code so your donation can be processed. This information passes directly to ISC’s payment processor, PayPal. ISC neither uses nor stores this information.\nTechnical Data We use Google tags on our Sites to support basic statistics about # of visitors, pages visited, time spent, browsers used and geographic location of our users. We have declined to opt into any of the extended tracking and analysis that Google offers which attempts to enrich this data with other data Google may have about other user behavior or demographics. Many of our online systems have logs, which log IP addresses of connections. We don\u0026rsquo;t do any processing to attempt to associate these addresses with usernames, email addresses, or organizations. We do analyze the logs in aggregated form to determine site usage and software download levels. One exception to this is F-Root: we do share some F-Root data with ICANN and DNS-OARC for research purposes only. Any such traffic logs we share after the GDPR implementation date of May 25, 2018 will be anonymized. Third Parties Who May Store or Process Personal Data We use services for interacting with users, optimizing our websites, and managing our search presence. Each of our contracts with these third parties restricts the use of personal information so it can only be used to provide the services under the contracts. In addition, these companies are required to treat all information and protect it by processes and procedures no less strict than those used by ISC.\nOur current services and/or contractors are:\nSalesforce – hosts ISC\u0026rsquo;s contact database. Access is limited to a few ISC employees. Document360 - hosts ISC\u0026rsquo;s Knowledgebase. Access to articles is open to the public, without registration or login. 
Github – hosts ISC open source and has Personal Data submitted by users in issues and patch requests as well as technical data Github may collect. Access is public. BrightTalk \u0026amp; YouTube - hosts recordings of past webinars for later viewing Zoom – provides conferencing services and may have data on prior conference attendance. SurveyMonkey – provides survey operations and has data on prior survey responses, which may include technical data such as IP addresses from cookies. SendOwl - provides store operations for Kea premium software sales Shopify - provides store operations for ISC swag store Paypal - processes payments from SendOwl and Shopify for purchases from ISC We can be contacted, about this or anything else, at info@isc.org.\nOur main phone number (which generally goes to voicemail) is +1 650-423-1300. Our business address is PO Box 360, Newmarket, NH 03857.\nLinks ISC\u0026rsquo;s Statement of Privacy\nEUGDPR.org\nWikipedia on GDPR\n","permalink":"/blogs/privacy-faq/","tags":["privacy","data"],"title":"ISC and Your Personal Data"},{"categories":["Kea","DHCP"],"contents":"ISC has a long history of participating in hackathons, starting a long time ago at Gdansk University, and continuing with IETF Hackathons. Every time the goal was similar: spend a day or two playing with a new technology, concept, or idea to see whether it makes sense, understand how to possibly extend Kea to support it properly, and gain some experience. This time the goal was to revisit the concept of YANG models and the NETCONF interface. See our earlier hackathon report from 2016.\nIETF Hackathon in London Hacking software in London: IETF 101 Hackathon\nSince we did some experiments with YANG and NETCONF in an earlier IETF hackathon, the obvious first goal was to revisit what\u0026rsquo;s been going on in the Sysrepo project. They\u0026rsquo;ve been busy for the last two years and the code made a lot of progress. 
Installing the software was fast and reasonably easy using their new Docker image. The availability of C++ bindings was a very welcome discovery.\nCompared to our attempts two years ago, another aspect that matured quite significantly was the IETF YANG model for DHCPv6. While several issues were discovered, the overall state of the model is quite good and we were able to take the YANG model and load it into sysrepo.\nWe managed to define and load a simple but full-featured DHCPv6 server configuration. It covered the server\u0026rsquo;s duid configuration and quite a few options (including the simple ones, like DNS servers, but also complex ones, like lightweight 4over6 options with containers and v4-v6 bindings). After some tweaks, we managed to load that configuration and make it available via the NETCONF interface.\nWith that infrastructure in place we moved on to the major part of the hackathon, which was to get the information extracted and actually used by Kea.\nKea-netconf daemon The problem of retrieving a configuration from NETCONF is complex. First, a new Kea instance that starts for the first time must be able to retrieve its configuration. Second, once the configuration is retrieved and applied, it needs to listen for any changes that may become available and then apply them internally.\nThe diagram above shows the overall architecture of the solution. Sysrepo software is used to store the YANG model, configure based on that model, and provide a remote NETCONF interface. This provides a standard frontend to the Kea instance that can be used remotely.\nWe spent some time listing out the tasks the kea-netconf daemon would have to perform: connect to sysrepo, retrieve the initial configuration, translate it to JSON syntax that can be understood by Kea servers, then send that configuration to Kea. 
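To make the translation step concrete, here is a toy sketch in Python. It is purely illustrative: kea-netconf itself is written in C++, and the field names in the YANG-style input below are invented rather than copied from the real IETF model. The sketch maps a network-range fragment onto the JSON shape of a Kea subnet4-add command.

```python
# Toy illustration of a "translator": convert a YANG-style
# network-ranges fragment into a Kea control-channel command.
# Field names in yang_fragment are invented; this is not kea-netconf code.

def translate_network_range(yang_range: dict) -> dict:
    """Map one YANG-ish network range onto a subnet4-add command."""
    subnet = {
        "id": yang_range["id"],
        "subnet": yang_range["network-prefix"],
        # Kea expresses pools as "start - end" strings.
        "pools": [{"pool": p["start"] + " - " + p["end"]}
                  for p in yang_range.get("address-pools", [])],
    }
    return {"command": "subnet4-add", "arguments": {"subnet4": [subnet]}}

yang_fragment = {
    "id": 1,
    "network-prefix": "192.0.2.0/24",
    "address-pools": [{"start": "192.0.2.10", "end": "192.0.2.100"}],
}

cmd = translate_network_range(yang_fragment)
print(cmd["command"])  # subnet4-add
```

The real translators must also handle deletions and partial updates of existing configuration, which is where the callback complexity described next comes in.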
A significant challenge will be using callbacks to listen for specific changes and act when specific parts of the whole configuration change. The tricky part here is to apply just the small part of the configuration that has changed, without reapplying the whole configuration. Think about the case of adding the 1001st subnet when you already have 1000 subnets configured.\nThe problem here is how to deal with the complexity of the translation process.\nTranslation process The DHCPv6 server YANG model itself is complex. Its definition takes 18 pages of text and uses many dependencies on other models. Any attempt at converting the whole model in its entirety is clearly a recipe for disaster. Even if we somehow could pull this off, maintaining such a large monolithic codebase would be a trauma we\u0026rsquo;d have to endure on a daily basis.\nFortunately, a solution to this problem emerged. We came up with the concept of translators. A translator is a self-contained piece of code that is able to take a relatively small part of the configuration in YANG syntax and convert it to JSON format that is understandable by Kea. Obviously, Kea must be able to provide the necessary commands to process such chunks, but fortunately many of these commands are already supported: subnet4-list, subnet6-list, subnet4-add, subnet6-add, subnet4-del, subnet4-get, subnet6-get, reservation-add, reservation-get, reservation-del and many more.\nThe prototype framework written during the hackathon is far from complete, but it laid out a good foundation. It is able to load multiple translators and register callbacks for each of them. With more development effort spent on it, it could become a fully featured solution.\nSo far we have developed a prototype for TranslatorNetworkRanges that covers network ranges, or using Kea nomenclature, Subnets and Pools. There is also a skeleton for TranslatorOptions that will one day be able to handle options. 
TranslatorInterfaces and TranslatorDuid were considered.\nSo where does this lead us?\nLong-term goal: centralized configuration storage Right now, Kea 1.3 is a nice stand-alone server that does its job. But network engineers expect more than a single server. They want multiple instances running in parallel, they want to deploy new servers with a push of a button, and they need centralized configuration for huge networks, instantaneous configuration updates, and more.\nThe long-term goal for Kea is to provide a solution that will offer the ability to store all of its configuration and runtime state in a database. With this capability available, it will be possible to spawn new Kea instances and simply point them to a database: \u0026ldquo;there\u0026rsquo;s everything you need, now go do your job.\u0026rdquo; We are not there yet and there is a long road ahead, but the ultimate goal is clear.\nThis objective is not going to be realized in one step. We need to get there in smaller, more manageable steps. Kea is already able to use a REST interface. Extending it to also use a NETCONF interface will add to the flexibility.\nTurning this vision into reality ISC has already decided to implement the ability to store essential parts of the configuration in databases. Kea 1.3 can store host reservations in a separate backend database. With Kea 1.5, we will extend this to subnets and options. More configuration elements will come in future releases.\nWe have not made a decision about NETCONF yet. We would love to hear from network operators, ISPs, and other interested people about their thoughts on NETCONF. If you are interested, please share your use cases. What are the most important things you\u0026rsquo;d like to do with a NETCONF interface? Finally, are you willing to help fund the development of this work? If you can, share your opinion on the kea-users list. 
If you prefer to share your thoughts more privately, please send an e-mail to Vicky Risk and Tomek Mrugalski.\nThank you to Razvan Becheriu, Andrei Pavel, Mislav Novakovic, and Luka Perkov from Sartura, Normen Kowalewski, and Ian Farrer. Some of you participated and wrote code, others helped with configuration and various tips or donated other essential pieces of configuration. Thanks!\n","permalink":"/blogs/kea-and-netconf-towards-automated-configurability/","tags":["Kea","events"],"title":"Kea and NETCONF in 2018: Towards Automated Configurability"},{"categories":["BIND","DNS"],"contents":"Extension mechanisms for DNS were standardized in 2013 Despite this, there continue to be non-compliant implementations. DNS software developers have tried to solve the problems with the interoperability of the DNS protocol and especially its EDNS extension (RFC 6891 standard) by various workarounds for non-standard behaviors. However, temporary workarounds are not a long-term solution. These workarounds excessively complicate DNS software and are now also negatively impacting the DNS as a whole. The most obvious problems caused by these workarounds are slower responses to DNS queries and the difficulty of deploying new DNS protocol features. Some of these new features (e.g. DNS Cookies) would help reduce DDoS attacks based on DNS protocol abuse.\nOpen source DNS software developers agree To prevent further deterioration of DNS services, the developers of four major open source DNS software systems have agreed to discontinue support for these non-standard solutions. All new releases of DNS software from CZ.NIC, ISC, NLnetlabs, and PowerDNS after February 1, 2019 will not contain workaround code for non-compliance with EDNS standard RFC 6891.\nTest your domains and servers You can test your domains and authoritative DNS servers using the web application https://ednscomp.isc.org/ednscomp/. 
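Under the hood, a compliance tester mainly sends queries carrying an EDNS OPT pseudo-record and observes how the server responds. A minimal, stdlib-only Python sketch of building such a query and detecting the OPT record (illustrative only; it assumes an uncompressed query and is not a general DNS message parser):

```python
import struct

# Illustrative sketch: build a DNS query with or without an EDNS OPT
# record, and detect the OPT record in query wire format. Assumes the
# simple, uncompressed names found in queries we build ourselves.

def build_query(name, qtype=1, with_edns=True):
    # Header: id, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0,
                         1 if with_edns else 0)
    qname = b"".join(bytes([len(l)]) + l.encode()
                     for l in name.split(".") if l)
    question = qname + b"\x00" + struct.pack(">HH", qtype, 1)  # class IN
    opt = b""
    if with_edns:
        # OPT RR: root name, TYPE=41, CLASS=UDP payload size, TTL=0, RDLEN=0.
        opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0, 0)
    return header + question + opt

def query_has_edns(wire):
    qdcount = struct.unpack(">H", wire[4:6])[0]
    arcount = struct.unpack(">H", wire[10:12])[0]
    pos = 12
    for _ in range(qdcount):          # skip the question section
        while wire[pos] != 0:
            pos += 1 + wire[pos]
        pos += 1 + 4                  # terminating root label + QTYPE/QCLASS
    for _ in range(arcount):          # scan the additional section
        while wire[pos] != 0:         # uncompressed names only
            pos += 1 + wire[pos]
        pos += 1
        rrtype = struct.unpack(">H", wire[pos:pos + 2])[0]
        if rrtype == 41:              # 41 = OPT (RFC 6891)
            return True
        rdlen = struct.unpack(">H", wire[pos + 8:pos + 10])[0]
        pos += 10 + rdlen
    return False
```

A server that does not implement EDNS must still recognize such a query and answer it with a plain DNS message carrying RCODE=FORMERR, per RFC 6891 section 7.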
A test result with a green message “All Ok” indicates that you are already prepared for the changes and do not need to do anything. If the result of the test is anything other than the green message “All Ok”, please update your DNS software. If you are using the latest version of your server software, please contact its developer and ask for a fix. In this case, we recommend attaching a link to the test result, which contains technical details, to your message.\nNote to DNS software vendors Please note that full EDNS support (RFC 6891) in DNS software is not mandatory.\nIf you decide not to support EDNS, it is mandatory to correctly answer queries with EDNS in accordance with RFC 6891 section 7, namely, to answer with a valid DNS message containing RCODE=FORMERR. Please follow the RFC mentioned above while implementing this. Thank you!\nNon-compliant domains may become unavailable Domains served by DNS servers that, according to the above-mentioned tests, are not compliant with the standard, will not function reliably after February 1, 2019, and may become unavailable.\nWe are aware of the importance of this change and we want to inform as many people as possible. We are going to keep drawing attention to this change, which will begin to apply in less than a year. If you have the ability to spread this information to people who are in charge of networks and DNS servers, we will be glad if you share the link to this blog post. Our goal is a reliable and properly functioning DNS that cannot be easily attacked.\nAdapted (with permission) from a blog post at CZNIC by: Petr Špaček\n","permalink":"/blogs/end-to-bandaids/","tags":["BIND","flag day"],"title":"End to Bandaids for Broken EDNS"},{"categories":["BIND","DNS"],"contents":"The 1H 2018 Schedule for DNS and BIND classes is posted. There will be classes in Amsterdam; Redwood City, CA; New York City; and Zurich.\nThese classes are delivered by Men and Mice, but cover the use of the BIND open source software via the CLI. 
They are hands-on and have been very highly reviewed by past students.\nHere are the links to the three (overlapping) courses:\nDNS \u0026amp; BIND Week: https://www.menandmice.com/training/dnsb-w/\nDNS \u0026amp; BIND Fundamentals: https://www.menandmice.com/training/dnsb-f/\nDNS \u0026amp; BIND Advanced: https://www.menandmice.com/training/dnsb-a/\nMain page with course dates: https://www.menandmice.com/training/\n","permalink":"/blogs/2018-bind-and-dns-training-schedule/","tags":["BIND","training"],"title":"2018 BIND and DNS Training Schedule"},{"categories":["DHCP","ISC DHCP"],"contents":"The Problem with DDNS and Dual Stack clients When clients obtain both an IPv4 and an IPv6 address, they generally want the hostname added to DNS to be the same for both the v4 and the v6 address – i.e. they want the same name to have both an A and a AAAA record.\nThis is hard to coordinate because:\nThe DDNS updates can come from either the DHCP server(s) or from the client directly (and client updates may or may not be permitted by the DNS server). In ISC DHCP, the DHCPv4 and DHCPv6 servers run as separate daemons. The DHCPv4 and DHCPv6 protocols are very different; the client requests for v4 and v6 addresses will be asynchronous and thus need some sort of signalling mechanism to ensure that: Two clients don\u0026rsquo;t get the same name (one with the A RR and the other with the AAAA). A new client doesn\u0026rsquo;t \u0026ldquo;steal\u0026rdquo; a name already assigned to another client. Dynamically-assigned names don\u0026rsquo;t replace or delete static names. Unused name-address mappings are cleaned up, either when released, or when the same client comes back with a new address. In DHCP 4.3 and earlier we provided two mechanisms for managing DDNS updates via the DHCP server:\nInterim (this uses a TXT RR for each dynamic hostname), which was written before the standards were finalised. Standard (this uses a DHCID RR for each dynamic hostname). 
(Previously, we also had \u0026ldquo;ad-hoc\u0026rdquo; but this was deprecated in 4.2.0.) ISC DHCP servers can be configured to use either Interim or Standard (see ISC DHCP support for Standard DDNS), but in a dual-stack environment, both need to be using the same mechanism to manage dual-stack client names properly – they are not interoperable. This is due to the interaction of v4 and v6 addresses and the DNS. If you want the DHCP server to insert both an IPv4 and IPv6 address for a given name, then the server must generate the same DHCID or TXT RR for both transactions (and in the case of the ISC DHCP server you must use the standard style – this is because of a bug in the interim style which means that ISC DHCP v4 and v6 servers don\u0026rsquo;t generate matching client IDs).\nWe have tackled this problem in ISC DHCP 4.4, as described below.\nWhat is Dual-Stack Mixed Mode (DSMM)? Dual-Stack Mixed Mode (DSMM) operates under the premise that all DHCP servers supporting one protocol (v4 or v6) will use one DDNS update style (interim or standard) while all servers supporting the \u0026ldquo;other\u0026rdquo; protocol use the \u0026ldquo;other\u0026rdquo; update style. This ensures that the type of guard record (TXT or DHCID) is always the same for a given protocol.\nIt provides a combination of separation and coordination to help DHCP and DNS administrators support an environment in which there are both static and dynamic addresses in DNS, and where the client dual-stack implementations cannot be relied upon to be compliant with current RFCs in how they provide DHCID and request hostname to address mappings. 
DSMM should prevent the creation of orphaned guard records and unusable dynamic names, but at the expense of removing the guarantee that a client with an IPv4 address and hostname can keep the same hostname when it obtains an IPv6 address too; it also means that two clients can end up sharing the same name – one of them with an IPv4 address and the other with IPv6.\nIn short, when DSMM is enabled, a DHCP server will ignore DNS address and guard records of the \u0026ldquo;other\u0026rdquo; type when deciding if updates can be made, or whether the existing entries are either static or belong to another client and must be left alone. In no case will address or guard records of the \u0026ldquo;other\u0026rdquo; type be altered by a DSMM server.\nLimitations of DSMM Implementing DSMM removes the possibility that a client requesting a name for an IPv4 address will be guaranteed to be able to use the same name for an IPv6 address. It removes the assurance that the IPv4 and IPv6 addresses found in DNS for a specific hostname belong to the same client. DSMM does not apply to updating Reverse DNS (but we anticipate that Reverse DNS will be correctly maintained in most instances). Allowing direct client updates should be avoided.\nChanges in ISC DHCP 4.4 In 4.4, ISC DHCP introduces three new configuration parameters:\nThe ddns-dual-stack-mixed-mode parameter controls whether the server applies Dual-Stack Mixed Mode rules during DDNS conflict resolution. This parameter is off by default, has no effect unless update-conflict-detection is enabled, and may only be specified at the global scope. The ddns-other-guard-is-dynamic parameter controls whether a server running DSMM will consider the presence of the other update style DHCID RR as an indication that a DNS entry may be overwritten. It should only be enabled after careful study as it allows DNS entries that would otherwise be protected as static to be overwritten in certain cases. 
This parameter is off by default, has no effect unless ddns-dual-stack-mixed-mode is enabled, and may only be specified at the global scope. The ddns-guard-id-must-match parameter controls whether the client id within a DHCID RR must match that of the DNS update\u0026rsquo;s client to permit DNS entries associated with that DHCID RR to be overwritten. Proper conflict resolution requires ID matching and should only be disabled after careful consideration. When disabled, it allows any DNS updater to replace DNS entries that have an associated DHCID RR, regardless of client identity. This parameter is on by default, has no effect unless update-conflict-detection is enabled, and may only be specified at the global scope. Implementing DSMM on your DHCP servers (The steps below assume that your DNS servers are configured to accept updates from your DHCP servers – this article does not cover that part of the set-up.)\nDecide which of your DHCP servers (v4 and v6) is going to use ddns-update-style interim and which will use standard, and configure them accordingly. Add to both servers the new option ddns-dual-stack-mixed-mode, and also make sure that you have update-conflict-detection enabled. On your servers, consider whether you will need to enable ddns-other-guard-is-dynamic. You would normally use this to permit updates to address records which would otherwise be interpreted as static (no accompanying guard of its type) because a guard of the other type does exist. Under default handling, this would be considered a static entry and the update would not be allowed. Use of ddns-other-guard-is-dynamic essentially states that the presence of the other type of guard indicates that entries for this FQDN of either type are dynamic after all. On your servers, consider whether you will disable ddns-guard-id-must-match. Disabling this option permits dynamic clients to overwrite each other’s entries whilst still protecting static entries. 
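Taken together, the parameters and steps above define a decision procedure. The following is a simplified, hypothetical Python sketch of that logic (the record model and function name are invented for illustration; this is not the actual ISC DHCP implementation):

```python
# Hypothetical sketch of the DSMM update-permission logic; the
# (rrtype, value) record model is a simplification for illustration.

def dsmm_update_allowed(records, my_guard_type, client_id,
                        guard_id_must_match=True,
                        other_guard_is_dynamic=False):
    """Decide whether this server may (over)write DNS entries for an FQDN.

    records:       list of (rrtype, value) pairs already in DNS for the name
    my_guard_type: "DHCID" for the standard style, "TXT" for interim
    """
    other_guard_type = "TXT" if my_guard_type == "DHCID" else "DHCID"
    my_guards = [v for (t, v) in records if t == my_guard_type]
    other_guards = [v for (t, v) in records if t == other_guard_type]

    if not records:                    # nothing there at all: free to add
        return True
    if my_guards:                      # our style's guard exists (with or
        if guard_id_must_match:        # without an address record)
            return client_id in my_guards
        return True
    if other_guards:                   # only the other style's guard exists
        return other_guard_is_dynamic  # ddns-other-guard-is-dynamic decides
    return False                       # unguarded entries look static
```

For example, under this sketch an A record with no guard at all is treated as static and left alone, while an AAAA record guarded only by the other style's record may be overwritten only when other_guard_is_dynamic is enabled.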
The criteria under which a dynamic entry will be permitted to be updated can be expressed as follows:\nNo records for this FQDN exist. An address record is present and the guard record, optionally matching the client ID (see ddns-guard-id-must-match), is present. An address record is not present but the guard record, optionally matching the client ID (see ddns-guard-id-must-match) is present (recovery from orphaned guard record scenario). Optionally, the only guard record present is the other type of guard record, regardless of whether or not the other type of address record is present (see ddns-other-guard-is-dynamic). Summary In an ideal world\u0026hellip;\nISC DHCPv4 and DHCPv6 servers both use Standard style DDNS. DHCP clients provide the right information to both v4 and v6 DHCP servers. Either there are no direct client DDNS updates, or clients that do this implement the Standard style DDNS fully and correctly. In the real world\u0026hellip;\nClients do not adhere to current DDNS RFCs when communicating directly with DNS servers or via their DHCP server. DHCP clients provide inconsistent DHCID values for v4 and v6. Update failures due to name collisions, absences and other problems lead to orphaned RRsets in DNS, blocking the further use of formerly-dynamic names. Reverse DNS RRsets are not updated consistently. Dual-Stack Mixed Mode, whilst deviating from the RFCs, attempts to provide a workable DDNS production environment that accommodates most broken client behavior until such time as all clients can be trusted or required to be RFC-compliant.\nThis blog post is based on Using Dual-Stack Mixed Mode (DSMM) with DDNS in ISC DHCP 4.4, in ISC\u0026rsquo;s Knowledgebase. 
The KB article may have updates on this information.\n","permalink":"/blogs/using-dual-stack-mixed-mode-dsmm-with-ddns-in-isc-dhcp-4-4/","tags":["dual-stack","mixed mode","ddns","dhcp","isc dhcp"],"title":"Using Dual-Stack Mixed Mode (DSMM) with DDNS in ISC DHCP 4.4"},{"categories":["BIND","DNS"],"contents":"We will be adding QNAME minimization in the next major version of named.\nQNAME minimization is described in IETF RFC 7816. It is an important component of an overall DNS privacy strategy. This project will make it possible for the operator of a BIND resolver to configure it to minimize unnecessary information leakage. Data leaked through DNS lookups is increasingly stored and analyzed and can be used for surveillance purposes. Even worse, such surveillance does not require the sophistication of a government actor. Data is leaked routinely to every DNS system in the path of every user lookup if QNAME minimization is not in use, so all that is required is that those system operators collect and analyze the data. The information leaked is metadata only, related to the Internet resource the end user is seeking; it could disclose the existence of an email conversation, a PGP key lookup for a correspondent, or research on sensitive topics or people.\nThe Open Technology Fund has generously agreed to sponsor this work. The mission of the OTF is to support Internet freedom. The OTF is sponsoring QNAME minimization because of its focus on \u0026ldquo;Privacy enhancement, including the ability to be free from repressive observation and the option to be anonymous when accessing the internet.\u0026rdquo;\nAnyone wishing to follow along or comment on this enhancement is welcome to join ISC\u0026rsquo;s Gitlab for BIND 9. 
This is issue #16.\n","permalink":"/blogs/bind-to-add-qname-minimization/","tags":["BIND"],"title":"BIND to Add QNAME Minimization"},{"categories":["BIND","DNS"],"contents":"In July 2017 we opened ISC\u0026rsquo;s RT bug database for public read-only browsing of BIND and ISC DHCP issues. New bugs submitted were still kept confidential by default, but issues that had been triaged, and feature requests, were publicly viewable.\nNow we want to take a further step towards transparency and welcoming community participation in the BIND 9 project. We have migrated to GitLab for our BIND development platform, and it is now open for community use.\nGitLab will make collaborating with ISC easier and more transparent. Community members can create and manage their own accounts. Community developers can submit merge requests directly, and will get author credit tracked in the source repo (instead of emailing patches to our old bug tracker). GitLab includes an open issue tracker, where all issues are open to public view by default and where community participants can comment on and add to existing issues, as well as create new ones. We welcome upvoting on feature requests. GitLab includes a wiki space for sharing designs or plans. Because we are hosting GitLab internally at ISC, we can also maintain private issues and private branches, to protect work on confidential issues and suspected security vulnerabilities. BIND users, patch contributors, operating system packagers, tool developers, researchers and anyone in the DNS community are all welcome to access gitlab.isc.org and navigate to the BIND project. To create an account, go to gitlab.isc.org, or link an existing GitHub or GitLab account.\nFurther Details Security issues should still be sent via email (encrypted if possible) to security-officer@isc.org. GitLab supports creation of confidential issues. 
If you need to report an issue or provide information that cannot be shared publicly, select this checkbox when you submit your issue. ISC Support customers should continue to raise issues with their support team through their own portal or via email to support@isc.org. ISC will continue to protect customer confidentiality while working on support customer issues. ISC\u0026rsquo;s GitLab instance will not accept creation of new issues via email to reduce spam entries. You will be able to send and receive updates via email, however. As of March 5, 2018, we will no longer accept new BIND issues in our old bugs.isc.org database. We will begin redirecting email users ASAP, but will still log their submissions in bugs.isc.org. Existing open BIND issues will be ported selectively to GitLab over the next 2 months. If you have an issue pending in bugs.isc.org that is not actively being worked on and that you feel should be a priority, please feel free to re-enter it into GitLab to ensure it survives the transition. Note that this change applies only to BIND 9 issues; ISC DHCP is still using bugs.isc.org and Kea is still using kea.isc.org. ","permalink":"/blogs/isc-gitlab-for-bind/","tags":["BIND","GitLab"],"title":"ISC GitLab for BIND"},{"categories":["ISC"],"contents":"Spend your summer working on open source with ISC! We’re excited that ISC has been selected as one of the participating organizations in this year’s Google Summer of Code program. We are proposing a number of worthwhile extensions to the Kea project. Suggested projects will help you develop practical experience with C++, user interface design, REST API design and more. Some of these are relatively easy, others quite challenging. Kea has an active, fast-growing user community that you can tap into for requirements and feedback.\nParticipants have until March 12th to make a proposal. We’d like to help you succeed. 
If you are interested, please email the development team at kea-dev@lists.isc.org and let us know what project you are thinking of working on. We can point you to examples of existing Kea designs that may help you get started on your proposal.\nSee the ISC overview: https://summerofcode.withgoogle.com/organizations/5884725911093248/\n","permalink":"/blogs/looking-for-gsoc-interns/","tags":["GSOC","summer","intern"],"title":"Looking for GSOC Interns"},{"categories":["Kea","DHCP"],"contents":"Why should you use Kea DHCP? Please read our case study of UNH\u0026rsquo;s Interoperability Lab to find out why one organization relies on Kea for its DHCP needs.\n","permalink":"/blogs/unh-interoperability-lab-case-study/","tags":["UNH","case study","Kea"],"title":"Kea - UNH Interoperability Lab Case Study"},{"categories":["BIND","DNS"],"contents":"BIND 9.12 is out!\nDownload We had to hold onto this over the recent holiday period, while we were waiting for a window to publish a reported vulnerability and circulate patched versions. Finally, now we are able to post the BIND 9.12 final.\n9.12 has new features (NSEC Aggressive use, Serve Stale, Response Policy Service Interface), updates to cryptography and improved performance (dramatic improvement for glue-heavy scenarios). We refactored several of the more complex, error-prone functions and removed obsolete features. 
Before upgrading a production system, please read the release notes, as we have also changed some default settings.\n","permalink":"/blogs/bind-9-12-0-released/","tags":["BIND","releases"],"title":"BIND 9.12.0 Released"},{"categories":["BIND","Performance","Security"],"contents":"The recently announced Meltdown bug is a serious vulnerability in Intel CPUs that allows malicious programs to indirectly read information held within the kernel of the operating system via a cache-timing side channel attack.\nMitigations for this bug have been released and (in Linux at least) these take the form of changes to the way that kernel memory is addressed whenever a \u0026ldquo;userspace\u0026rdquo; program is executing. This technique is known as Kernel Page Table Isolation (KPTI) – the kernel prevents the bulk of its address space from being accessed when not needed by swapping around the contents of the page table whenever there\u0026rsquo;s a context switch. However, that swapping comes at a price. Some benchmarks have shown performance impacts as high as 30% on heavy I/O workloads.\nThe question, then, is: what is the impact on a typical BIND 9 server? To test this, I installed the latest version of Fedora Core 27 on a pair of 12-core (Xeon E5-2680 v3) Dell R430 servers connected via Intel X710 10 Gbps Ethernet. One of the systems was set up to generate traffic using dnsperf and the other was set up to serve the DNS root zone using the latest development branch of BIND 9.\nRather than swap between different kernel versions (which might introduce other performance effects) I installed the latest kernel (4.14.11-300.fc27.x86_64) and then started a series of tests enabling or disabling KPTI as needed using the nopti boot-time option.\nEach test comprised multiple runs of dnsperf, each lasting 30 seconds, with the query rate progressively ramping up between runs from 10k queries per second up to 360k. 
I then recorded the mean response latency as measured by dnsperf.\nThe first pair of runs (the first done with KPTI, the second without) gave a surprising result – the latency at high query rates was about 8% lower with KPTI than without, when it was expected to be higher! I\u0026rsquo;ve previously blogged and presented on the issues around getting consistent performance measurements with BIND 9 so there was clearly some unknown test variability at play here.\nBIND 9 is a complicated piece of software, with its own multi-threaded task management system. On modern CPUs and with multi-queue network interface cards the task system, the NICs, and the O/S process scheduler all interact in ways that aren\u0026rsquo;t readily controlled. In particular it is extremely difficult (if not impossible) to balance the traffic so that each NIC queue (and hence CPU core) is receiving an identical number of packets. On these 12 core systems I typically find that perhaps 8 of the cores are working equally hard (albeit not at full capacity), with a couple working not so hard and a couple more working overtime.\nThe main feature that affects the balancing of incoming packets to CPU cores is called Receive Side Scaling (RSS). This uses a hash-based algorithm fed with the source and destination IP addresses and port numbers to decide which NIC queue the packet should be added to. On the X710 cards the hashing algorithm is re-seeded with a 52 byte random key at each reboot. Was this, perhaps, the main source of the variability?\nFortunately it\u0026rsquo;s possible to configure your own random seed for the receive flow hashing via the ethtool -X command. I therefore decided to perform two series of tests each using a different fixed random seed (denoted #1 and #2 in the graph). In total I performed 15 separate ramp up tests for each combination of flow hash and KPTI, with a reboot after every five tests. 
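For reference, the flow hash used by RSS is typically a Toeplitz hash computed over the source/destination addresses and ports. A small, illustrative Python sketch (not the X710 driver\u0026rsquo;s actual code; real hardware maps the hash through an indirection table rather than a simple modulo) shows why both the packet fields and the seed key determine which queue a flow lands on:

```python
# Illustrative Toeplitz hash as used by RSS. Assumes len(key) is at
# least len(data) + 4 bytes so the 32-bit window never runs off the key.

def toeplitz_hash(key: bytes, data: bytes) -> int:
    """For every set bit of `data`, XOR in the 32-bit window of `key`
    starting at the same bit offset."""
    key_int = int.from_bytes(key, "big")
    key_bits = len(key) * 8
    result = 0
    for i in range(len(data) * 8):
        byte, bit = divmod(i, 8)
        if data[byte] & (0x80 >> bit):
            result ^= (key_int >> (key_bits - 32 - i)) & 0xFFFFFFFF
    return result

def rss_queue(key: bytes, src_ip: bytes, dst_ip: bytes,
              src_port: int, dst_port: int, nqueues: int) -> int:
    # For IPv4/UDP the hash input is src IP, dst IP, src port, dst port.
    # Modulo is a simplification of the hardware's indirection table.
    data = (src_ip + dst_ip +
            src_port.to_bytes(2, "big") + dst_port.to_bytes(2, "big"))
    return toeplitz_hash(key, data) % nqueues
```

Because the result depends on the seed key, a card that re-seeds the key at every reboot will redistribute the same set of flows differently each time, which is exactly why pinning the key with ethtool -X removes one source of run-to-run variability.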
The graph below shows the average of the 15 runs for each of those four combinations:\nWhile (on average) the blue line graphing the combination of flow hash 1 without KPTI is slowest at high packet rates, the other three combinations generally perform the same as each other, albeit with more variability at the highest query rates. If KPTI were the sole cause of the different results, each KPTI measurement would have performed worse than its corresponding non-KPTI measurement with the same hash. Similarly, if the change of hash were the sole cause, we would not have expected the flow hash 2 to be faster than flow hash 1 in non-KPTI cases and slower in KPTI ones.\nMy conclusion therefore is that some other unidentified variable is responsible for the variability shown, and that the effect of that variable (or variables) is greater than that caused by the Meltdown mitigation patch (KPTI) which in turn appears to be relatively insignificant.\nPlease note however that these results are highly specific to our particular test environment. The CPUs in the test lab are quite recent and support the PCID and INVPCID extensions which help mitigate the overhead of swapping the page tables at each context switch. Your mileage may vary considerably, especially on older hardware without PCID support.\nSidenote: In preparing for this benchmark I did also measure the amount of time taken to compile BIND 9 with or without KPTI. A typical compile without KPTI took 269 seconds, and 273 seconds with KPTI. The extra four seconds was split 50:50 between \u0026ldquo;user\u0026rdquo; time and \u0026ldquo;system\u0026rdquo; time.\n","permalink":"/blogs/meltdown/","tags":["BIND","meltdown","security"],"title":"The Performance Effects of Meltdown Bug Mitigation on BIND 9"},{"categories":["ISC","Annual Reports"],"contents":"2017 was a good year for ISC! 
Keeping an open source project funded, keeping any small non-profit going, is always a challenge, and we are grateful to have succeeded in doing that for another year. We finished 2017 feeling pretty stable and cautiously optimistic about our ongoing projects and ISC\u0026rsquo;s future.\nHighlights of 2017 Staff changes: We hired a very capable new BIND developer, Michał Kępień, who came to us from NASK in the spring. Then in September we hired Ondřej Surý from CZNIC as our new Director of DNS Development. Tomek Mrugalski was promoted to Director of DHCP Development, responsible for both Kea and ISC DHCP. Stephen Morris is now leading QA and DNS Research. Fred Baker, a member of ISC\u0026rsquo;s BOD for many years, became a part-time contractor and has taken on the role of ISC’s liaison to RSSAC. The BIND refactoring resulted in an incredible 5X performance improvement for glue-heavy authoritative applications. We also removed some obsolete features, ended support for Windows XP and lwresd and generally ditched some legacy baggage. We implemented Serve Stale, protection against an outage like the massive DDOS that hit Dyn in October 2016. IANA officially assigned port 953/tcp for BIND\u0026rsquo;s RNDC protocol. We opened ISC\u0026rsquo;s main bug database to guest users, in the process sponsoring some open source work by Best Practical. We published a free Resolver Check iOS application in the Apple Store, open sourced the ISC Perflab test software and published a Node.js library for BIND\u0026rsquo;s RNDC interface. The DHCP team produced two Kea releases, adding a remote management API and support for shared networks. Our experiment with selling Kea premium hooks on the website was modestly successful, encouraging us to look at other low-cost offerings to sustain our DHCP work. In 2017, for the first time, kea-users mailing list activity outstripped discussion on DHCP-users. 
We developed ISC DHCP 4.4.0, which we believe will be our last major branch for this mature project, adding the shared library support that the OS packagers have been asking for and updating the DHCP client for the first time in a while. F-Root added 125 new nodes through an innovative new partnership with Cloudflare, improving response times for some users and further increasing resiliency. We instrumented F-Root for the new RSSAC02 statistics and began a rolling refresh of the traditional F-Root node hardware, using our new \u0026ldquo;F single\u0026rdquo; design. We replaced the DNSSEC Look-Aside Validator registry (DLV) with an empty zone, ending that \u0026ldquo;temporary\u0026rdquo; service after 9 years. Meanwhile, we continued optimizing ISC\u0026rsquo;s extensive network to reduce costs, automating more of the F-Root management, and deploying Kea internally. What DIDN\u0026rsquo;T happen We did some ambitious refactoring of BIND (in 9.12) and DIDN\u0026rsquo;T apparently break anything! We DIDN\u0026rsquo;T find any really bad security bugs in BIND (we published 8 CVEs, but they were generally applicable only to rare configurations). Although the random subdomain attacks and other standard DDOS attacks continued, our users DIDN\u0026rsquo;T see any new types of DNS attacks in 2017. ICANN DIDN\u0026rsquo;T roll the DNS root key. At ISC we spent a lot of time working to make sure that BIND users would not have a service interruption, but in the end ICANN decided to postpone the event, partly because of data from some root key telemetry we had added to BIND a year in advance. Thank you Craigslist, Switch.CH and Alibaba Cloud made generous unsolicited donations to support our mission. Mozilla, Comcast and APNIC underwrote specific open source development projects for everyone\u0026rsquo;s benefit. 
We are also grateful to our many long-time support subscribers, who include some of the best network operators in the world, and who provide us with a stable funding base to maintain and evolve our open source. To all those who sent us patches or reported issues to us in 2017 – we really value these contributions as well, and we love hearing from you. Our top submitters for 2017 were: Tony Finch for BIND (he submitted and we reviewed and resolved an incredible 22 patches), Jiri Popelka for ISC DHCP, and Andrei Pavel for Kea. We\u0026rsquo;re looking forward to 2018 We have started using a lot more video in our team meetings. We all work remotely and we think it may help us enjoy meetings more, and improve teamwork. We will be looking for at least two more new technical staff during 2018, a BIND developer and a Systems/Support Engineer. We are planning to launch a new forum, in part to help engage our more experienced users in creating some best practices advice for newer users. We are encouraged by our BIND refactoring so far and plan to continue the renewal. Right now we are working on the design for a BIND hooks interface, to enable extension modules. We are migrating our working BIND repository to Gitlab after we release 9.12, to make community collaboration easier and more transparent. A recent blog article outlined changes to the BIND release model we are implementing in 2018, including adding rapid development releases off of our working master branch. We are planning to start building our own packages for BIND and to release a Python module for RNDC. Our DHCP programs continue to operate at a loss, but we trust that our persistence will be rewarded in 2018, as we have had a surge of interest from users deploying Kea. The team is fully subscribed working on adding high-availability features and a supported Cassandra backend to Kea, and we are developing a configuration migration tool that will help people switch to Kea from ISC DHCP with minimum effort. 
2017 measured\nISC Staff (including part-time): 30\nCountries ISC staff live and work in: 7\nUS States ISC staff live and work in: 6\nVisitors to ISC’s website (from Google Analytics): 450,000, down 5% from the prior year (609K sessions)\nVisitors to ISC Web site by Geography: 18.9% from the US, followed by Japan (8.6%), India (7.4%), China (5.1%), and Germany (4.9%)\nVisitors to ISC’s Knowledgebase: 223,000\nBIND-users messages: 1277\nDHCP-users messages: 554\nKea-users messages: 832\nPublic presentations by ISC: 8\nIETF drafts by ISC Standardized: 1 (DHCPv6 Failover Protocol, RFC 8156)\nIETF drafts by ISC in process: 9\nBIND versions released: 46 (including development versions)\nDHCP versions released: 4 (2 maintenance releases + beta versions)\nKea versions released: 4 (Kea 1.2 \u0026 1.3 + beta versions)\nMost popular download from the ISC web site: BIND 9.10.5-P3, with 193,461 downloads\nDownloads of BIND versions posted in July: 9.11 – 25,087; 9.10 – 84,160; 9.9 – 10,249\nDownloads of Kea 1.3 (posted in October): 1,381; Kea 1.2 (posted in April 2017): 3,771 (8 months)\nDownloads of the DIG app from the Apple Store: 4,361\nDownloads of the Resolver Check app: 4,154\nDownloads from the App Store that look suspiciously like a robot: 5,600\nBIND Issues Opened: 564 (including 109 features and 35 tests)\nDHCP Issues Opened: 98 (including 18 features and 1 test)\nKea Issues Opened: 399 (208 enhancements and 39 tasks)\nBIND Issues Resolved: 481 (including 85 features and 25 tests)\nDHCP Issues Resolved: 103 (including 28 features)\nKea Issues Resolved: 249 (99 enhancements, 38 tasks)\nCommunity-submitted patches for BIND reviewed and accepted: 42\nCommunity-submitted patches for ISC DHCP reviewed and accepted: 19\nCommunity-submitted patches for Kea reviewed and accepted: 15 ","permalink":"/blogs/2017-retrospective/","tags":["2017","review"],"title":"2017 Retrospective"},{"categories":["DHCP"],"contents":" “If we wait for the moment when everything, absolutely everything is ready, we shall never 
begin.”\n― Ivan Turgenev\nBeta testers needed We rely on the user community for field testing. Please download the 4.4.0 beta version and report the results of your testing to dhcp-bugs at isc dot org. The final 4.4.0 version will be posted in about a month, depending on the results of beta testing.\nMany users, packagers, and OEM developers have already contributed bug reports, patches, and suggestions for this release. We appreciate these community contributions and have tried to list them all in the release notes.\nISC DHCP 4.4.x is the last major branch of ISC DHCP that we expect to produce. The 4.4 branch will be supported indefinitely – as long as there is enough financial support from the user community to cover the costs of ongoing maintenance. We released ISC DHCP 4.3.0 in February 2014. That version was designated as an Extended Support Version, and we have now supported it for four years, as promised. We will be ending support for the 4.3 branch in July 2018, after allowing 6 months for users to migrate to 4.4.0. We will continue to maintain the older 4.1 branch along with the new 4.4 version, because the 4.1 branch has a much smaller footprint, required for some applications.\nThere are some changes to existing behavior in this version that may not be entirely backwards-compatible. We recommend reading the release notes carefully.\nMajor changes in 4.4.0 There are four major changes in the 4.4.0 branch. Most of these we have been \u0026ldquo;saving up\u0026rdquo; because they are the sort of changes we can only make in a new branch.\n1. Adopt the MPL 2.0 license With this branch, we have changed the open source license for ISC DHCP to the Mozilla Public License, MPL 2.0. Please see https://www.mozilla.org/en-US/MPL/2.0/ to read the MPL 2.0 license terms.\n2. 
Dynamic DNS additions We added three new server configuration parameters which influence DDNS conflict resolution:\nddns-dual-stack-mixed-mode – alters DNS conflict resolution behavior to mitigate issues with non-compliant clients in dual-stack environments.\nddns-guard-id-must-match – relaxes the DHCID RR client ID matching requirement of DNS conflict resolution.\nddns-other-guard-is-dynamic – alters dual-stack mixed-mode behavior to allow unguarded DNS entries to be overwritten in certain cases.\nThe server now honors the update-static-leases parameter for static DHCPv6 hosts.\n3. dhclient improvements We\u0026rsquo;ve added three command-line parameters to dhclient:\nprefix-len-hint – directs dhclient to use the given length as the prefix length hint when requesting prefixes.\ndecline-wait-time – instructs the client to wait the given number of seconds after declining an IPv4 address before issuing a discover.\naddress-prefix-len – specifies the prefix length passed by dhclient into the client script (via the environment variable ip6_prefixlen) with each IPv6 address. We added this parameter because we have changed the default value from 64 to 128 in order to be compliant with RFC3315bis draft (-09, page 64) and RFC5942, Section 4, point 1.\nWARNING: The new default value of 128 may not be backward compatible with your environment. If you are operating without a router, such as between VMs on a host, you may find they cannot see each other with a prefix length of 128. In such cases, you\u0026rsquo;ll need to either provide routing or use the command-line parameter to set the value to 64. Alternatively, you may change the default at compile time by setting DHCLIENT_DEFAULT_PREFIX_LEN in includes/site.h.\ndhclient will now generate a DHCPv6 DECLINE message when the client script indicates a DAD failure.\n4. Dynamic shared library support A configure script, configure.ac+lt, which supports libtool is now provided with the source tarball. 
This script can be used to configure ISC DHCP to build with libtool and thus use dynamic shared libraries.\nOther highlights:\nThe server now supports dhcp-cache-threshold for DHCPv6 operations. The server now supports DHCPv6 address allocation based on EUI-64 DUIDs. Experimental support for alternate relay port is provided in both the server and relay for IPv4, IPv6, and 4o6 (see: draft-ietf-dhc-relay-port-10.txt). ","permalink":"/blogs/isc-dhcp-the-last-branch/","tags":["ISC DHCP"],"title":"ISC DHCP – The Last Branch"},{"categories":["BIND","DNS"],"contents":"We are updating our release model for BIND, starting in 2018. We have already shortened our release cycle: 9.10 and 9.11 each took \u0026gt;24 months, while we will bring out 9.12 in less than 18 months. Going forward, we want to produce a new branch every year. We intend to make early (.0 and .1) releases of new branches much more stable than previously, by minimizing changes from the prior stable branch. We will provide frequent new development drops for users who want to try new features. We are committed to continuing to provide an extended support version (ESV) for users who are most concerned about stability and want to minimize branch changes. Finally, we need to minimize the total number of supported versions at a given time while providing both fresh new and old stable branch options. What is changing? 1) New development releases\nThe biggest change will be the addition of development releases. Starting as soon as we release 9.12.0, we plan to create frequent releases of the Main branch, naming them 9.13.0, 9.13.1, etc. Thus, we will start issuing new versions as soon as we start development on the branch, and will not wait for a year or more of development to pass, as we have traditionally done. When we near the end of 2018 and the development of the BIND 9.13 version has had a handful of releases, we will renumber it and release it as BIND 9.14. 
This way, the new BIND version should be much more stable and well-tested than if there were a year\u0026rsquo;s worth of new changes since the prior release.\n2) Annual Stable versions\nWe plan to bring out new branches approximately every 12 months. Releases that are not designated as ESV will be supported for 12 months and then replaced with a new branch. Historically, our .0 and .1 releases were not stable enough for large-scale production use. We believe that by issuing frequent development releases off of our main branch and then renumbering that at the end of a year\u0026rsquo;s development, we can achieve stability with the initial release on the subsequent branch. This change is in line with our recent efforts to shorten the gap between new branches, already begun with 9.12.\nESV and Subscription versions are not changing.\n3) Extended Support Version - Supported for Four Years\nThe Extended Support Version (ESV) is intended for users who update infrequently, or who have a long pre-deployment integration or validation cycle. BIND 9.11 will be our next extended support version, followed by BIND 9.16. Every other Stable version after that will be designated for extended support.\nWe will continue to support ESV versions for 4 years from development, but late in the cycle we may only update these branches if there are security vulnerabilities in them - which will minimize churn for those long-stable versions. The only real change in our ESV commitment is that we will indicate which releases will become ESV at the start of the branch, rather than waiting for several maintenance releases.\n4) Subscription Edition\nThe Subscription Edition was created for our support customers who want to enjoy some of our newest BIND features, while running an older stable version. It is also known as our \u0026lsquo;Supported Preview\u0026rsquo; edition because we selectively backport and integrate new features (including unreleased ones) into an old stable version. 
Because we are willing to incorporate experimental new features into the Subscription Edition, we also may later remove or change some of these features, based on subscriber feedback.\nWhat version should I use? See this updated Knowledgebase article on selecting a BIND version.\nSummary Beginning in 2018, we will make minimal changes to the 9.9 and 9.10 branches in preparation for ending support in mid-2018. BIND 9.11 will become our next extended support version, replacing 9.9. We will overlap ESV and Subscription branches so that users who have an extended in-house test cycle have time to validate or integrate the new ESV before migrating. Branches which are not going to be extended-support versions will be short-lived. We will start a new main branch every year, and will begin releasing development versions on that branch immediately. We are confident that this updated plan will serve the wide range of BIND users well. If you have any feedback or concerns about this, please don\u0026rsquo;t hesitate to email me at vicky@isc.org.\nExample Release Plan\nThe example below illustrates the contrast between the long-lived ESV releases and the 12-month stable and development releases. The 9.11 ESV will be even more long-lived than the planned 4 years from release, in order to help us transition to this new model.\nWhat you can\u0026rsquo;t really see from this is that the new stable release will effectively just be a renumbering of the prior year\u0026rsquo;s development release. Once we make the transition, odd-numbered versions (9.13 and above) will be development versions, and even-numbered versions will be stable. 
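The parity rule described above can be sketched as a trivial check; this snippet is illustrative only, and the version strings are arbitrary examples:

```python
def bind_branch_type(version):
    """Classify a BIND 9 branch under the post-transition rule:
    odd-numbered minor versions (9.13 and above) are development
    branches; even-numbered minor versions are stable branches."""
    major, minor = (int(part) for part in version.split(".")[:2])
    if major != 9 or minor < 13:
        raise ValueError("rule applies only to BIND 9.13 and above")
    return "development" if minor % 2 == 1 else "stable"

print(bind_branch_type("9.13.2"))  # development
print(bind_branch_type("9.14.0"))  # stable
```

Under this scheme a development branch such as 9.13 is simply renumbered to the next even minor version when it stabilizes, which is why no separate lookup table is needed.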
We will also note which is which on our downloads page.\nSharp observers might see below that we were considering removing the \u0026ldquo;9\u0026rdquo; from the release numbers, a change we decided not to make at this time.\n","permalink":"/blogs/bind-release-strategy-updated/","tags":["BIND","release"],"title":"BIND Release Strategy Updated"},{"categories":["Kea","DHCP"],"contents":"Since we posted Kea 1.3, the team has been hard at work on plans for Kea 1.4.\nHigh Availability We want to address the need for high availability for DHCP services. ISC DHCP supports DHCPv4 failover, but this is a complicated and fragile protocol, and we are hoping to avoid implementing it in Kea. A number of Kea users have reported on the kea-users mailing list that they have managed to build HA Kea systems using a shared database backend, but there is no way, currently, to have an HA system with the higher-performance memfile system.\nWe wrote up a design plan for HA support for Kea that included load balancing between systems using the memfile option.\nRADIUS Integration We have heard from some ISPs that they need RADIUS integration to provide client access control, option provisioning, and accounting. One ISP also needs to specify classes of service (in terms of contention for Wi-Fi addresses). A significant challenge here is how to minimize the communications with the RADIUS system, particularly for lease renewals. This is being considered as a possible premium hook for Kea. We had a design proposal for RADIUS integration.\nWe welcome feedback on these proposed designs for Kea 1.4 on the kea-dev mailing list at kea-dev@lists.isc.org.\n","permalink":"/blogs/planning-for-kea-1-4/","tags":["Kea","high availability","RADIUS"],"title":"Planning for Kea 1.4"},{"categories":["BIND","DNS"],"contents":"BIND 9.12 development is getting closer to completion! Barring surprises, BIND 9.12.0b2, now available for download, should be the last development beta before the release candidate. 
If you want to give it a try in time to provide us feedback before our RC code freeze, you can find the beta at our Downloads page.\nNew Feature Branch We don\u0026rsquo;t introduce new features or make major changes to minor releases within a BIND 9 branch. New branches are where we make significant changes to BIND. BIND 9.12.0 will be the first release in the 9.12 series and in the release notes you can find information about the new features, improvements, and fixes that are included in 9.12. Key new features include:\nNSEC Aggressive Use - this feature, sponsored by APNIC, will reduce query loads on authoritative servers for signed domains; when existing cached records can be used by the resolver to determine that a name does not exist in the authoritative domain, no query needs to be sent. Reducing the number of iterative queries should also improve resolver performance. Serve Stale - When acting as a recursive resolver, named can now continue returning answers whose TTLs have expired when the authoritative server is under attack and unable to respond. Code for this feature was contributed by Akamai. The DNS Response Policy Service (DNSRPS) API is a mechanism to allow named to use an external response policy provider. This allows the same types of policy filtering as standard RPZ, but can reduce the workload for named, particularly when using large and frequently updated policy zones. It also enables named to share response policy providers with other DNS implementations such as Unbound. Thanks to Vernon Schryver and Farsight Security for the contribution. Mind the Gap You should review the \u0026ldquo;Features Removed\u0026rdquo; section as well. Some of these changes could break existing scripts that rely on them. We do make every effort to support backwards compatibility, and we only make this kind of change in a major release. For example, going forward HMAC-MD5 will not be recommended for RNDC keys. 
For backwards compatibility, HMAC-MD5 can still be used, but the default algorithm used by rndc-confgen is now HMAC-SHA256.\nThe release notes can be found at: ftp://ftp.isc.org/isc/bind9/9.12.0b2/\nFeedback Is Critical User input on development releases is an important part of improving our software, and we especially appreciate help from those who are willing to test development releases and provide constructive feedback. Thank you especially to those who have provided input during the alpha release period and also to those who will help evaluate the new beta release. Please send feedback to bind-bugs@isc.org.\nThank you!\n","permalink":"/blogs/bind-9-12-almost-ready/","tags":["BIND"],"title":"BIND 9.12 Almost Ready"},{"categories":["Kea","DHCP"],"contents":"Thank You! We would like to recognize and thank Mozilla Open Source Support and the Comcast Innovation Fund for sponsoring ISC\u0026rsquo;s work on Kea 1.3. The MOSS award funded the development of the Kea REST API over the past year, and Comcast provided partial sponsorship for the rest of the 1.3 release.\nThe #1 most frequently requested feature on the kea-users mailing list has been shared networks. This is often cited as a blocker for people trying to migrate to Kea from ISC DHCP, and it is the biggest new feature in Kea 1.3. In designing the feature, we asked both Kea and ISC DHCP users to weigh in on the design, and we ended up with a feature that is very similar to the equivalent feature in ISC DHCP.\nShared Networks This feature allows the administrator to group multiple subnets together. Clients in the shared network may be dynamically assigned any address from any of the included subnets. 
If necessary, you can specify a parameter on the shared network scope and then override its value in the subnet scope, in a client class, or on a host reservation.\nThis feature is commonly used to pool addresses from multiple subnets when the network has grown and more addresses are needed than are available on a single subnet. Shared networks are also useful for specifying common parameters such as options for multiple subnets. Shared networks are also used in cable networks, where it is useful to have the CPE devices on one subnet, and the customer devices on other subnets for management purposes.\nWe also finished the REST API sponsored by the 2016 Mozilla MOSS Award, and added two new hook libraries that build on this API.\nREST Interface Over HTTPS HTTPS provides authentication, confidentiality, and integrity for communications over the REST API. We have tested and provided example config files for securing the REST API using Apache and Nginx. We have also provided example config files for securing client communications using stunnel. The maximum size of control commands and responses via the REST API has been expanded, removing the 64K limitation present in Kea 1.2. This makes handling of large configurations possible. With these changes, the REST API is now ready for production use. Development of this feature was sponsored by a Mozilla MOSS award.\nTwo New Kea Hook Libraries Lease management via REST API - New API commands enable querying, adding, reporting on current leases, and modifying existing leases while Kea is running. This allows the administrator (or any system that interacts with Kea) to check presence and status of leases and make necessary changes as needed. These commands are implemented in a new Lease Commands hook library included in the main Kea distribution. Development of this feature was sponsored by a Mozilla MOSS award. 
Subnet management via REST API - Add, remove, and modify subnets in Kea via the API, without resending the entire Kea configuration. This will make managing subnets via the API more feasible for configurations with a large number of subnets or deployments that want to avoid small interruptions when updating the whole configuration. This feature is implemented in the new Subnet Commands hook library, available only in the Kea Subscription package to encourage financial support for the project. New hook point - A new hook point command_processed allows hook libraries to interact with command handling. Existing and new libraries have been updated to use that hook point.\nFeature Parity We added many smaller features, mostly to facilitate migration from ISC DHCP to Kea for more users.\nNew options - This release added support for 21 DHCPv4 and 10 DHCPv6 options. Support for the DHCPv4 vendor-specific option (code 43) has been improved, in response to questions raised on the Kea-users mailing list. It is now possible to use vendor-specific syntax for that option. Options in pools - It is now possible to define options in DHCPv4 pools. This additional level gives users an ability to fine-tune options and option values. Conditional expressions - a new expression ifelse is now supported. It allows users to make conditional choices in expressions, e.g. in client classification or flexible identifier. Other bug fixes and small improvements - As usual, we fixed many bugs and made other small improvements. In total, 126 tickets (74 in beta and 52 in final) were closed. We thank all the users who have made the extra effort to report technical issues with Kea at kea.isc.org! There are only two remaining significant gaps between ISC DHCP and Kea:\nISC DHCP supports DHCPv4 failover. We plan to support a high-availability deployment model with Kea, using a shared database cluster backend which will work with DHCPv6 as well as DHCPv4. 
This is not yet implemented, but it is at the top of our priority list going forward. Some of our more sophisticated users have managed to build high availability systems already, but we do not provide any documented, tested way to do that currently. There are many scripts, graphical interfaces, and integrations with other systems for ISC DHCP that were developed over the past 21 years by users and other organizations. It will take time to build up a similar \u0026lsquo;ecosystem\u0026rsquo; for Kea. We do have a few such contributions for Kea already, however:\nActive Leases Script A simple script from MGM51.com parses the Kea memfile lease files and produces an easily readable list of current active leases.\nKea \u0026ldquo;Run Script\u0026rdquo; Hook This generic hook will call an external script at any/all of the hook points supported by Kea. Written by Baptiste Jonglez.\nSystemd service unit files These were provided by Jiri Popelka, a RedHat maintainer.\n","permalink":"/blogs/kea-1-3-released-take-it-for-a-spin/","tags":["Kea","new release"],"title":"Kea 1.3 Released – Take It for a Spin!"},{"categories":["BIND","DNS"],"contents":"This afternoon, at the DNS-OARC 27th workshop in San Jose, CA, Dan Mahoney of ISC replaced the contents of dlv.isc.org with an empty signed zone.\nThe DLV was conceived as a temporary transition service, to enable DNSSEC adoption. We have been announcing in various venues since early 2015 that we would be decommissioning the DLV. Finally, we have done it.\nWe will continue to host an empty signed zone at dlv.isc.org, so that resolvers that query the zone won\u0026rsquo;t continue retrying.\n","permalink":"/blogs/dlv-replaced-with-signed-empty-zone/","tags":["DNS","DNSSEC","DLV"],"title":"DLV Replaced With Signed Empty Zone"},{"categories":["Kea","DHCP","New Releases"],"contents":"Kea version 1.3 is ready for field testing. If beta testing goes well, we plan to release the Kea 1.3 final by the end of October. 
Please download and send your comments to kea-dev@lists.isc.org, or open tickets in the Kea GitLab instance at https://gitlab.isc.org/isc-projects/kea/-/issues.\nNew Features in 1.3 Shared subnets: Support for shared subnets has been one of the most frequently-requested features on the Kea-users mailing list. We wanted to provide functionality equivalent to the shared subnets support in ISC DHCP, and after consulting with users on both the kea-users and DHCP-users mailing lists, we decided on a configuration design that is similar to ISC DHCP. This feature enables configuration of multiple subnets (more than two) and multiple addresses per subnet (more than two) on a single Ethernet interface. This can be useful when the original subnet assigned to the host has no more available addresses and the administrator needs to assign more IP addresses to the host, or a host needs addresses on multiple different subnets. Subnet management via API: Ability to add, remove, and modify subnets in Kea via the API, without resending the entire Kea configuration, using a new hook library. This will make managing subnets via the API more feasible for configurations with a large number of subnets (typical of IPv6). This Subnet Commands library will be offered as a premium option. Lease management via API: New API commands enable querying, adding, reporting on current leases, and modifying existing leases while Kea is running. Use cases for modifying existing leases include migrating clients from one subnet or pool to another by forcing them to get a new address on renewal, and the related use case where some maintenance is planned, and the administrator wishes to gradually quiesce use. This feature is sponsored by the Mozilla MOSS award. Management API security: We have secured the new REST API for Kea using HTTPS to add client-Kea authentication, confidentiality, and integrity. The sample client provided uses HTTPS now. 
We also have expanded the size of the configuration that can be downloaded via the REST API, removing the 64K limitation in Kea 1.2. With these changes, the REST API is now ready for production use! This feature is sponsored by the Mozilla MOSS award. 21 new DHCPv4 options, 10 new DHCPv6 options: We have expanded the list of standard options we support in Kea and have made the formatting for vendor-specific options more flexible. For more details on what is in Kea 1.3 beta, please see the Release Notes.\nWe want to hear about new deployments! If you have deployed Kea successfully, we would love to hear about your experience. Most users only think to post on the kea-users mailing list when they need help, but we are equally interested in hearing about what is working for you. Some things that we and other users would like to know include:\nWhat is your network application? What scale is it? Why did you decide to look at Kea? Was there a problem with your prior solution? What did you observe during your testing? What has been the result of your implementation? How can YOU support Kea development? Kea is free open source, but developing and maintaining this software is not free. ISC employs two dedicated software engineers, plus two engineers shared with other projects, plus a dedicated test engineer for the Kea project.\nYou can help support the Kea project:\nAsk your employer to subscribe for Kea technical support from ISC. Purchase the package of Kea premium hooks available for download from this website. Join the kea-users mailing list and help other users. Develop a Kea hook application and share it with other users on the Kea-contrib GitHub site, or on the kea-users@lists.isc.org mailing list. 
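Commands sent over the REST API described above are JSON objects. As a minimal sketch of what a client might build and POST (Python; the lease4-get command name and its ip-address argument are shown for illustration of the lease-management commands mentioned earlier, and should be checked against the release documentation rather than taken as a definitive reference):

```python
import json

def build_kea_command(command, arguments=None):
    """Build the JSON body for a Kea control command, in the general
    {"command": ..., "arguments": {...}} shape used by the REST API."""
    payload = {"command": command}
    if arguments is not None:
        payload["arguments"] = arguments
    return json.dumps(payload)

# Hypothetical query for the lease held by one address; in practice
# this body would be POSTed over HTTPS to the Kea control endpoint.
print(build_kea_command("lease4-get", {"ip-address": "192.0.2.10"}))
```

Because every command shares this one envelope shape, the same helper works for subnet and configuration commands as well.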
","permalink":"/blogs/kea-1-3/","tags":["Kea","DHCP","release"],"title":"Kea 1.3 - Beta Version Posted for Testing"},{"categories":["BIND","DNS"],"contents":"Many of you may already be aware of yesterday\u0026rsquo;s announcement from ICANN concerning the postponement of one of the steps in the currently in-progress root zone KSK rollover.\nSpecifically, they have announced that the 11 October 2017 date that was planned for the retirement of KSK-2010 will be postponed for at least three months because root zone trust anchor telemetry data sent by servers running BIND 9 and other DNS server software indicates that many operators are still unprepared for the change and using soon-to-be-retired trust anchors.\nTo help our users be sure that they are prepared for the transition when it occurs, we have prepared an Operational Notification: KSK-2010 will be retired from the root zone, potentially affecting validating resolvers.\nIf you are operating a server which performs DNSSEC validation, we suggest that you take a few moments to read the notification and follow its suggestions to ensure that you are prepared when ICANN resumes the root KSK rollover.\n","permalink":"/blogs/dns-root-key-rollover-update/","tags":["BIND","DNS","root key"],"title":"DNS Root Key Rollover Update"},{"categories":["ISC","Company News"],"contents":"SWITCH, the thirty-year old foundation that operates .ch and .li (country-code top-level domains for Switzerland), and provides cloud services for Swiss educational organizations, has made a donation of 10,000 Swiss francs to ISC.\nOli Schacher at SWITCH, who has also made technical contributions to BIND, said \u0026ldquo;SWITCH relies heavily on BIND and we appreciate the hard work you put into it!\u0026rdquo;\nJeff Osborn, President of ISC, accepted the contribution and noted that \u0026ldquo;this generous donation reminds us that open source is a community asset. 
SWITCH is continuing the long tradition of community collaboration that has built the modern Internet.\u0026rdquo;\n","permalink":"/blogs/donation-received-from-switch-ch/","tags":["ISC","donation","SWITCH"],"title":"Donation Received from SWITCH.ch"},{"categories":["BIND","DNS"],"contents":"APNIC has generously offered to sponsor the addition of aggressive negative caching, or NSEC Aggressive Use, in BIND 9.12.0. They have explained the reasons for supporting this in an APNIC blog posting.\nEarlier research by Geoff Huston, Researcher at APNIC, into the volume of root zone queries had identified that the majority of queries to the root zone were for non-existent domains (NXDOMAIN responses). These queries are unnecessary because a busy resolver already has enough prior negative responses to interpolate these additional negative responses. For example, if the resolver already has a definitive prior response in cache that no zones exist between \u0026ldquo;.abba\u0026rdquo; and \u0026ldquo;.acme\u0026rdquo;, then it is unnecessary to query for \u0026ldquo;.abcd\u0026rdquo;.\nThe benefits of aggressive negative caching are:\nFaster responses to end-users. When a resolver queries the root for information, an end-user waits longer than necessary whenever the resolver could have answered the question without making another query. Fewer queries to the DNS root, relieving the root servers of added load as Internet use grows. Finally, this technique provides some protection against a type of denial-of-service attack in which a resolver is pounded with a large number of requests for randomly-generated (non-existent) subdomains, requiring the resolver to make many useless queries. BIND 9.12.0 will synthesize negative answers from cached NXDOMAIN, NODATA, and wildcard responses supplied with NSEC records. The 9.12.0 alpha release is available now, supporting synthesis from NXDOMAIN responses only. 
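The interpolation idea above (a cached proof that nothing exists between two names also answers any query that falls between them) can be sketched as follows. This is a deliberate simplification for illustration: it uses plain lexicographic string comparison rather than DNSSEC canonical ordering, and it skips signature validation entirely:

```python
def covered_by_cached_denial(qname, nsec_gaps):
    """Return True when a cached NSEC-style gap (owner, next_name),
    proving that no names exist between the pair, already covers
    qname, so an NXDOMAIN can be synthesized without a new query.
    Simplified: plain lexicographic order, not canonical DNS order."""
    return any(owner < qname < next_name for owner, next_name in nsec_gaps)

cache = [("abba", "acme")]  # cached proof: no top-level names between these
print(covered_by_cached_denial("abcd", cache))   # True: answer from cache
print(covered_by_cached_denial("zebra", cache))  # False: must query the root
```

A real resolver performs this coverage check against validated NSEC records in its cache; when the check succeeds, the random-subdomain queries described above never leave the resolver.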
The 9.12.0 beta release will add support for synthesizing replies based on NODATA and wildcard responses.\n","permalink":"/blogs/nsec-caching-should-limit-excessive-queries-to-dns-root/","tags":["BIND","DNS","NSEC","caching"],"title":"NSEC Caching Limits Excessive Queries to DNS Root"},{"categories":["BIND","DNS"],"contents":"BIND 9.12.0 is ready for alpha testing. We made some significant changes in this release. There are more changes that will be merged between alpha and beta, but we prioritized the biggest changes for the alpha, to maximize test coverage for them. Please help us test this release, and as usual send bug or test reports to bind9-bugs@isc.org.\nPlease note that we have changed some default settings in this release, which is something we only do with a major version change.\nRefactoring complex functions We prioritized refactoring complex functions over everything else in this release, to improve the maintainability and stability of BIND for the future. We picked three of the biggest, hairiest functions and re-structured them to simplify them. Each of these is large, scored off the charts for McCabe complexity, has been a source of recent CVEs, and is an area where we continue to develop and make changes.\nCode implementing name server query processing has been moved from bin/named to a new library \u0026ldquo;libns\u0026rdquo;. This will make it easier to write unit tests for name server code, or link name server functionality into new tools. We also refactored the resolver.c file and collected all the response policy (RPZ, aka DNS firewall) functions together (they were distributed throughout the code, complicating diagnosis of problems). We explained the BIND refactoring initiative in detail in an earlier blog post.\nCrypto update We want to encourage users to transition to more secure, more modern cryptographic algorithms, while causing as little unexpected damage as possible to users who update their software without paying much attention. 
In this release, we have removed the default for RSA encryption, so the user has to explicitly select an algorithm and key length. This change may break some scripts and may even cause a surprise algorithm change for some people, but it was the gentlest way of migrating we could think of. Since the discovery of the SHA-1 collision, nobody should use SHA-1 when there are many newer, more secure options available.\ndnssec-keygen and dnssec-keymgr will no longer generate RSA keys less than 1024 bits in length, although we will still validate short keys. We have also added support for the new Ed25519 algorithm for DNSSEC, in anticipation of a release of a stable version of OpenSSL that supports it. (Ed448 support is still pending OpenSSL support.)\nRemoving obsolete features In keeping with the theme of renewal, we have removed some legacy features. We announced we are no longer supporting Windows XP. We ended support for the lightweight resolver daemon and library (lwresd and liblwres) and removed them from 9.12: we recommend lwresd users try the getdns API. We removed the dig+sigchase feature, which was replaced with the delv tool in 9.10.\nPerformance A survey of our subscribers at the beginning of 2017 showed that performance was a significant concern. This release includes a number of small changes, and one large one - the addition of a special cache for glue records - that together improve authoritative performance noticeably. Performance of glue-heavy applications such as TLDs is improved by as much as 500% (yes, that is five times faster!). minimal-responses is now set to yes by default, which also contributes significantly to improved performance.\nBIND is still not going to be the fastest DNS system in a race, but now that we have a good performance testing system, we are able to easily validate improvements and avoid performance regressions.\nNew features There are not many new features in this release because of the focus on renewing the code. 
However, because DDoS attacks against DNS systems were growing in scale during 2016, we also wanted to update the resolver\u0026rsquo;s ability to maintain service during an effective DDoS of either the DNS root, a top-level domain, or another frequently-consulted popular authoritative zone.\nWe added NSEC Aggressive Use, supported by funding from APNIC, to enable the resolver to construct an answer for a root zone query from information already obtained in previous queries. In this alpha we are constructing answers from prior NXDOMAIN responses; we have still to add synthesizing answers from NODATA responses and support for wildcards.\nAkamai contributed a patch, written by a former member of the BIND 9 development team, that implements Serve Stale. Serve Stale returns a stale answer when a fresh answer is unavailable due to an unresponsive authority. Akamai has been using a version of BIND with this patch internally for several years with good results. A similar approach enabled some public resolvers to continue to offer access to popular sites like Twitter during the October 2016 massive DDoS against Dyn and we wanted BIND to have the same ability.\nCommunity contributions As always, we had a lot of help from our friends. This release will include a number of bug fixes that would never have happened without the active support of BIND users who reported problems or contributed patches. We will publish the list of technical contributors with the final, because there are more patches in review now.\nWhat\u0026rsquo;s next? While this release is in alpha testing, the BIND 9 development team will continue work on the NSEC aggressive use feature (adding support for wildcards and synthesis from NODATA responses), add unit tests for refactored code, review and commit submitted patches, and investigate the most severe outstanding bug reports. 
If there is time, we also hope to replace our Windows installer, do a bit more work on TCP pipelining, and address a few longstanding customer requests. Of course, we will also be looking for and addressing bug reports from alpha testing.\nWe plan to issue a beta in October and to release a final before the end of 2017.\nDownload 9.12.0 alpha and start testing!\nFor more information - see the Release Notes for 9.12.0 alpha\n","permalink":"/blogs/bind-9-12-0-alpha/","tags":["BIND","DNS","new release"],"title":"BIND 9.12.0 Alpha"},{"categories":["DHCP","Kea"],"contents":"There has been much debate on various technical mailing lists about what the default prefix size should be for DHCPv6 prefix delegation. This post is not about what the prefix size should be, but about how you enable your DHCP server to provide an appropriate length prefix based on a \u0026ldquo;hint\u0026rdquo; from the client.\nBackground The DHCPv6 protocol (including RFC 3315bis) allows a client to send an empty prefix with just a prefix length as a hint for the server specifying the desired prefix length.\nAccording to RFC 3633, DHCPv6 clients may specify preferences when soliciting prefixes by including an IA_PD Prefix option within the IA_PD option. Among the preferences that may be conveyed is the \u0026ldquo;prefix-length\u0026rdquo;. The RFC states that servers \u0026ldquo;MAY choose to use the information\u0026hellip;to select prefix(es)\u0026rdquo; but does not specify any particular rules for doing so. The prefix-length-mode statement can be used to set the prefix selection rules employed by the server, when clients send a non-zero prefix-length value. The mode parameter must be one of:\nignore - The requested length is ignored. The server will offer the first available prefix. prefer - The server will offer the first available prefix with the same length as the requested length. If none are found then it will offer the first available prefix of any length. 
exact - The server will offer the first available prefix with the same length as the requested length. If none are found, it will return a status indicating no prefixes available. minimum - The server will offer the first available prefix with the same length as the requested length. If none are found, it will return the first available prefix whose length is greater than (i.e., longer than) the requested value. If none of those are found, it will return a status indicating no prefixes available. For example, if the client requests a length of /60, and the server has available prefixes of lengths /56 and /64, it will offer a prefix of length /64. maximum - The server will offer the first available prefix with the same length as the requested length. If none are found, it will return the first available prefix whose length is less than (i.e., shorter than) the requested value. If none of those are found, it will return a status indicating no prefixes available. For example, if the client requests a length of /60, and the server has available prefixes of lengths /56 and /64, it will offer a prefix of length /56. In general, \u0026ldquo;first available\u0026rdquo; is determined by the order in which pools are defined in the server\u0026rsquo;s configuration. For example, if a subnet is defined with three prefix pools A, B, and C:\nsubnet 3000::/64 {\n # pool A\n pool6 { : }\n # pool B\n pool6 { : }\n # pool C\n pool6 { : }\n}\nthen the pools will be checked in the order A, B, C. For modes prefer, minimum, and maximum, this may mean checking the pools in that order twice. A first pass through is made looking for an available prefix of exactly the preferred length. If none are found, then a second pass is performed starting with pool A but with appropriately adjusted length criteria.\nISC DHCP support Support for this feature was implemented in ISC DHCP 4.3.1.\nThe directive is prefix-length-mode and the default is \u0026ldquo;exact\u0026rdquo;. 
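The selection rules above can be sketched in Python. This is only an illustration of the described behavior (hypothetical function and argument names), not ISC DHCP's actual implementation:

```python
# Sketch of the prefix-length-mode selection rules described above.
# Pools are checked in configuration order; "prefer", "minimum", and
# "maximum" make a second pass with relaxed criteria when no prefix of
# exactly the hinted length is available.

def select_prefix(pool_lengths, hint, mode):
    """Return the prefix length the server would offer, or None
    (a "no prefixes available" status).

    pool_lengths -- available prefix lengths, in pool order
    hint         -- non-zero prefix length requested by the client
    mode         -- "ignore", "prefer", "exact", "minimum", or "maximum"
    """
    if mode == "ignore":
        # Hint is ignored; first available prefix wins.
        return pool_lengths[0] if pool_lengths else None

    # First pass: look for an exact match on the hinted length.
    for length in pool_lengths:
        if length == hint:
            return length

    if mode == "exact":
        return None  # NoPrefixAvail
    if mode == "prefer":
        # Fall back to the first available prefix of any length.
        return pool_lengths[0] if pool_lengths else None

    # Second pass with adjusted criteria. Note that a numerically
    # greater length means a longer (i.e. smaller) prefix.
    if mode == "minimum":
        candidates = [l for l in pool_lengths if l > hint]
    else:  # "maximum"
        candidates = [l for l in pool_lengths if l < hint]
    return candidates[0] if candidates else None
```

With the /56 and /64 pools from the examples above and a /60 hint, this sketch returns 64 for "minimum", 56 for "maximum", and None for "exact", matching the described behavior.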
The \u0026ldquo;exact\u0026rdquo; value was chosen as the default for backwards compatibility; it preserves the prior behavior.\nThis means that if the client requests a prefix length and that exact length is supported, it is granted. If the server is not configured to specify that exact prefix length, NoPrefixAvail is sent back.\nEXAMPLE:\nIf the network is configured to give /64 by default with \u0026ldquo;prefer\u0026rdquo; and the client sends a /60 hint, it gets a /60. If the network is configured with the default \u0026ldquo;exact\u0026rdquo; behavior, the home gateway may not get any prefix if the network is only offering /64.\nThe \u0026ldquo;exact\u0026rdquo; behavior is difficult to use successfully, because the client has to implement some non-trivial logic to retry without the hint if it receives a \u0026ldquo;NoPrefixAvail\u0026rdquo; response.\nChange in default planned RFC 8168 was published after the ISC DHCP 4.3.5 release. It clarifies what the server should do when a hint is received. \u0026ldquo;prefer\u0026rdquo; is not exactly what the RFC suggests, but it is closer than \u0026ldquo;exact\u0026rdquo;.\nThe \u0026ldquo;prefer\u0026rdquo; setting seems better than \u0026ldquo;exact\u0026rdquo; because the server will do its best to honor the hint, but will also give something if the hint cannot be honored. We will be changing ISC DHCP\u0026rsquo;s default behavior to \u0026ldquo;prefer\u0026rdquo; in a future release. (Planned for ISC DHCP 4.4.0)\nBugs found in ISC DHCP DHCP is a rich environment for corner cases.\nOne issue addressed in ISC DHCP 4.3.6 was documented in the change log:\n\u0026ldquo;The server now checks both the address and length of a prefix delegation when attempting to match it to a prefix pool. This ensures the server responds properly when pool configurations change such that once valid, \u0026ldquo;in-pool\u0026rdquo; delegations are now treated as being invalid. 
During lease-file loading at startup, the server will discard any PD leases that are deemed \u0026ldquo;out-of-pool\u0026rdquo; either by address or mis-matched prefix length.\nClients seeking to renew or rebind such leases will get a response of \u0026ldquo;No Binding\u0026rdquo; in the case of the former, and the prefix delegation with lifetimes set to zero in the case of the latter.\nThanks to Mark Nejedlo at TDS Telecom for reporting this issue.\n[ISC-Bugs #35378]\u0026rdquo;\nAnother issue was recently reported when the client changes prefix size after obtaining a lease. This issue, found in ISC DHCP 4.3.6, was discussed on dhcp-users and then a bug was opened in ISC\u0026rsquo;s bug tracker. When a client released a /64 and requested a /60, ISC DHCP was returning the prior /64 lease. We plan to fix this issue in our next ISC DHCP release, 4.4.0.\nKea has not yet implemented prefix-length-mode Kea supports prefix delegation, but currently ignores the prefix-length hint from a client. In Kea, subnet selection is based on network topology but prefix delegation is not necessarily related to subnet selection. As long as the administrator has configured some prefixes in pools, clients will be assigned the next available prefix in order until the pool is exhausted. At that point Kea will begin assigning prefixes from the next pool. We hope to implement prefix-length hint support in some future Kea release (this is not currently a high priority).\nSee RFC 7934 and the discussion it created here: https://mailarchive.ietf.org/arch/msg/v6ops/5x3eUTkXR0lAdxoBf_ktUqhWgTE (74 posts so far) draft-pioxfolks-6man-pio-exclusive-bit-02 ","permalink":"/blogs/dhcpv6-prefix-length-mode/","tags":["ISC DHCP","Kea","DHCPv6"],"title":"DHCPv6 Prefix Length Mode"},{"categories":["DHCP","Kea"],"contents":"Kea is a flexible open source DHCP server that is becoming popular among system administrators. 
One of the most frequent feature requests was for an API for administrative tasks such as retrieving statistics, managing configuration, and certain run time aspects, like leases. Some of the functionality was there, but there was no way to communicate with Kea. At the same time, Kea was suffering from the old Unix paradigm of a daemon reading its configuration from a local text file. It\u0026rsquo;s a good approach, but in 2017 people expect something more from software that claims to be modern.\nISC engineers came up with a proposal for a RESTful API. Mozilla graciously decided to fund development through their Mozilla Open Source Support (MOSS) fund. Almost a year later, we are ready to show the results. The task at hand was not a trivial one, so we split the work into milestones.\nMilestone 1: Update Parser The first step was to update the parser. It was designed to read the configuration from a file on local disk. This is an approach that works well for smaller installations, but it was problematic for larger, modern deployments where there are many servers and the configuration is being pushed to those servers remotely. Kea had multiple, complex parsers spread in many files in multiple directories. This was difficult to maintain and extend. Also, the handling of defaults was confusing - the default values were spread out in many places and it required detailed code inspection to find out what the actual default was.\nWe significantly re-wrote Kea\u0026rsquo;s parsers. The new parsers are much simpler to understand, use, and extend. They print out more meaningful errors, so users have a better clue about what is wrong in their configuration and how to fix it. In the process a couple of new useful features were added: multi-line comments, bash (#) and C++ (//) style comments, and file inclusions. 
For details, see here.\nMilestone 2: Add a REST API Kea already used JSON commands over its control channel (which is a Unix domain socket), so we decided to extend this capability to a REST interface. A basic REST interface was implemented as an optional control agent (people who don\u0026rsquo;t need the management interface simply don\u0026rsquo;t run the daemon). As a result, Kea 1.2 now supports many more commands that can be used over REST/http: you can set configuration remotely, retrieve configuration, tell Kea to write configuration to disk, and do many other things the API offers.\nMilestone 3: Make it robust and secure The API was functional, but we still needed to secure it. The obvious way to protect http communication is to use TLS. We had a dilemma: we could implement TLS support on our own or use existing tools available for it. Getting cryptography right is a challenging task, so we opted to use the best methods that were already available. As a result, there are two Kea configurations that expose a secure https (with TLS) interface, using Apache or Nginx. Example configurations are available in the doc/examples/https directory.\nAnother significantly improved aspect is a move towards asynchronous handling of the socket operations. This opened up the possibility of handling commands and responses larger than 64K and paved the way for future extensions that may be large (e.g. returning thousands of leases from a subnet or configuring hundreds of subnets at once). These changes will be included in Kea 1.3.\nMilestone 4: Make good use of the capabilities Finally, the last step was to embrace the new interface with an application. We wanted a feature that would showcase the API but would also be useful in its own right. We picked leases, which are \u0026ldquo;contracts\u0026rdquo; between the DHCP server and the devices it serves. We wanted to provide as much flexibility as possible. 
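Whether sent over the Unix socket or the REST/http interface, the commands Kea accepts are plain JSON objects. A minimal sketch of building and sending one follows; the endpoint URL, port, and exact command name here are illustrative assumptions, not taken from the Kea documentation:

```python
# Sketch of the JSON command format used over Kea's control channel
# and, via the control agent, over REST/HTTP.
import json

def kea_command(command, arguments=None):
    """Build the JSON body for a Kea management-API request."""
    body = {"command": command}
    if arguments is not None:
        body["arguments"] = arguments
    return json.dumps(body)

# Retrieve the running configuration (hypothetical command name):
payload = kea_command("config-get")

# Sending it to a control agent would look roughly like this
# (commented out because it needs a running Kea instance):
#
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:8000/",
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```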
We designed 10 new commands that help with creating new leases, querying and inspecting existing leases using various search criteria, retrieving them, updating them, and even removing them when necessary. All the code for handling those commands is now written. The code is already available in the Kea github repository (commits for lease4-add, lease6-add, lease4-get, lease6-get, and for lease4-del, lease6-del, lease4-update, lease6-update, lease4-wipe, lease6-wipe). All of these changes will be included in the upcoming Kea 1.3.\nGoing forward The first two milestones are already released as Kea 1.2. The remaining code is available on github and is scheduled to be released in the upcoming Kea 1.3 update later this autumn. The work will not stop there. We already have plans to extend the API. We have requests for new commands, new parameters for existing commands, and management interfaces for existing and upcoming features. Kea users definitely do not suffer from a shortage of ideas! Over time we will make the best possible usage of the code that was created with Mozilla\u0026rsquo;s help.\nLet me take the opportunity to personally thank Mozilla and the Mozilla Open Source Support (MOSS) fund for their generous support of this work. We did our best to make the best use of the funds to listen to the open source community and deliver high-quality open source code for the benefit of all users. We also made sure the result is something that will continue to yield benefits for the community in the future. Thank you.\nTomek Mrugalski\nKea Lead Engineer, ISC\n","permalink":"/blogs/restful-interface-for-kea-dhcp-server/","tags":["DHCP","Kea","RESTful","API","MOSS","REST"],"title":"RESTful Interface for Kea DHCP Server"},{"categories":["BIND","DNS","Performance"],"contents":"ISC Performance Lab We have been monitoring BIND 9 performance using the ISC Performance Lab, described in this 2016 blog post. 
We realized we really needed something like this when users reported to us a significant performance drop when upgrading from version 9.9.5 to either 9.9.6 or 9.10.0. We now review performance changes every month and when we see a negative change, we can identify the commit that caused the change and investigate.\nAn Index, Not a Benchmark These measurements are designed to identify changes in performance from one release to the next, not to measure typical or maximum performance in actual production use. Although we show the absolute number of queries per second on the charts below, we have not made any effort to tune the configuration for our hardware, and these results are not necessarily predictive of performance in another configuration or on other hardware. Many features can have a significant performance impact (e.g. query logging), and our tests don\u0026rsquo;t measure other important factors, such as the speed of adding and removing zones.\nUse these results to help estimate the change in performance you might see when migrating from one release or branch to another. First, identify which of these \u0026rsquo;example applications\u0026rsquo; is most like your deployment:\n1 Million Delegations (e.g. a TLD) 1 Million Zones (e.g. a large domain hoster) Recursive (For recursive tests we send queries that target a separate authoritative server that holds both a single zone with 1M delegations and each of the 1M small zones referred to therein.) Then, find the result for the release you are running today, and compare with the release you are considering migrating to.\nThe data points below each represent the average of 3 test runs, with each test run consisting of 30 measurements. Changes of ~20KQPS in either direction are not significant given our test variability and sample size.\nOur recently posted set of BIND maintenance releases should not introduce any noticeable performance changes for anyone upgrading on the same branch. 
What caused the drop in performance between 9.9.6 and 9.9.9 / 9.10.4? We introduced a change in 9.9.6 and 9.10.0 that was intended to improve performance but that instead caused a huge drop in KQPS. The drop in KQPS was due to reducing the number of UDP listeners to NUM_PROCESSORS / 2. We made this change because of a UDP locking bug in older versions of RedHat EPL, which led us to believe that this would improve performance.\n3751. [tuning] The default setting for the -U option (setting the number of UDP listeners per interface) has been adjusted to improve performance. [RT #35417] The change DID improve performance in several tests on our hardware, but the only 8-core machine we had was 5 years past EOL at the time, and in retrospect, not a good basis for making a performance adjustment.\nAfter several other users reported a performance regression on their hardware, and a vigorous internal debate, we made a second change in 9.9.9 and 9.10.4 which brought performance back to the earlier level.\n4236. [func] On machines with 4 or more processors (CPU), the default value for the number of UDP listeners has been changed to the number of detected processors minus one. [RT #40761] It was after this performance regression that we purchased a modern multi-core machine for performance testing, and Ray Bellis began developing the ISC Performance Lab. We advise testing on your own hardware to get the optimal setting in this Knowledgebase article on \u0026lsquo;Choosing the right value for -U when starting named.\u0026rsquo;\n9.11 Branch Performance Another, smaller drop in performance from 9.10 to 9.11 can be seen by comparing the 9.11 and 9.10 charts. While addressing a bug reported by a user, we introduced another performance regression in 9.11. BIND was incorrectly preserving case-sensitivity from the zone_name in a zone statement over what is defined in the zone itself. 
We(1) have made subsequent changes that mitigated the performance impact of case preservation, but some performance penalty is inherent in careful preservation of the case in the authority. (For those who are interested, the fix is one of the changes in commit# 4605, currently committed to the 9.12 branch, which we will be backporting to 9.11.3.)\nUsing our Performance Lab, we have been able to validate a number of changes which will come out in our next major branch, BIND 9.12, that will improve performance significantly. We will provide an update on that when we release the BIND 9.12.0 alpha.\n(1) Both of the performance regressions were diagnosed and fixed by BIND core developer Mukund Sivaraman, who has had a special focus on BIND performance improvement.\n","permalink":"/blogs/bind9-performance-history/","tags":["BIND","DNS","performance"],"title":"BIND 9 Performance History"},{"categories":["DHCP","ISC DHCP"],"contents":"ISC DHCP Updated Updates to both supported branches of ISC DHCP are available now from our downloads page. For this release, we focused specifically on updating the DHC client software, although there are many server updates as well. We had a lot of help with this release, specifically from our open source packagers and OEM users.\nWhile we haven\u0026rsquo;t been able to review or accept every submitted patch, we are trying to publish those submitted patches via our newly-opened bugs.isc.org database. 
As part of the overall maintenance process, we are also reviewing and updating our library of Knowledgebase articles on ISC DHCP, including for example, this one on the DHC client.\nOpen source users and packagers who submitted bug reports or patches that resulted in changes in ISC DHCP 4.3.6:\nDavid Zych at the University of Illinois Norm Proffitt of Infoblox Fernando Soto at BlueCat Networks (x3) Jiri Popelka at Redhat (x2) Marius Tomaschewski at SUSE Wei Kong at Novell Mritunjaykumar Dubey at Nokia Alessandro Gherardi Mark Nejedlo at TDS Telecom Rodney Beede Bill Parker Pierre Clerissi Martin Pitt and Andrew Pollock Tim Dean Jay Ford, University of Iowa THANK YOU!\nPreview of ISC DHCP 4.4.0 Our next planned release of ISC DHCP will be a new feature branch, 4.4.0. We expect this will be the last new feature branch of ISC DHCP issued.\nAs previously announced, we will be re-licensing ISC DHCP under the Mozilla Public License 2.0 with this release. One significant objective is to resolve the long-standing dependency on including BIND libraries in the ISC DHCP version by enabling ISC DHCP to use the BIND libraries in the BIND version in the package. We will also be updating the DDNS dual-stack for non-compliant clients. We are considering changing our default behavior when receiving a prefix length hint for a prefix that is not available (in a DHCPv6 prefix-delegation scenario). We have added a feature that provides an EUI-64 DUID based address allocation mechanism. We are considering implementing dhcp-cache-threshold for IPv6 mode. We are still reviewing many other candidate features and fixes for this release, to determine what we will be able to include with our limited resources.\n","permalink":"/blogs/isc-dhcp-4-3-6/","tags":["ISC DHCP","new release"],"title":"ISC DHCP 4.3.6 and 4.1-ESV-R15 Posted"},{"categories":["BIND","DNS"],"contents":"New maintenance versions of each of our supported branches are available from our downloads page. 
We have also updated our 9.9 and 9.10-based Stable Preview releases; both are available to eligible support subscribers through your support queue.\nWe issue BIND maintenance versions approximately every 6 months.\nPreview of BIND 9.12 Our next planned BIND release will be a new feature branch, BIND 9.12, currently planned for December 2017. The 9.12 release will include significant refactoring of key functions in BIND, plus implementations of several improvements to increase efficiency and resilience, including NSEC Aggressive Use and serving stale data when the authority is unresponsive. We are excited to be including a number of changes which improve authoritative performance dramatically for the root and TLD zones, and significantly for other deployment scenarios.\nWe are ceasing support for lwresd, and recommending that any remaining users consider adopting getdns instead.\nConsider updating your signing algorithm Due to recent developments in cryptography, we will be removing the existing default signing algorithm from dnssec-keygen in BIND 9.12; users will now be required to specify signing algorithms explicitly when generating a key with this utility. We are not deprecating any supported algorithms at this time.\nWe hope this encourages a transition to more secure algorithms, but it will add another configuration choice for new users, and it may break scripts that currently rely on the existing default behavior. The dnssec-keymgr utility can help implement an algorithm change; update its default signing algorithm by changing the policy configuration file.\nRelease notes are posted next to the downloads. Below are some highlights.\nMajor changes in 9.11.2 New DNSSEC Root Key ICANN is in the process of introducing a new Key Signing Key (KSK) for the global root zone. BIND has multiple methods for managing DNSSEC trust anchors, with somewhat different behaviors. 
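In practice, the automatically maintained method is a one-line setting; a minimal named.conf sketch (the option is described in the paragraph that follows):

```
options {
    // Use the built-in root trust anchor and keep it current
    // automatically (RFC 5011 key maintenance).
    dnssec-validation auto;
};
```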
If the root key is configured using the managed-keys statement, or if the pre-configured root key is enabled by using dnssec-validation auto, then BIND can keep keys up to date automatically. Servers configured in this way should have begun the process of rolling to the new key when it was published in the root zone in July 2017. However, keys configured using the trusted-keys statement are not automatically maintained. If your server is performing DNSSEC validation and is configured using trusted-keys, you are advised to change your configuration before the root zone begins signing with the new KSK. This is currently scheduled for October 11, 2017.\nThis release includes an updated version of the bind.keys file containing the new root key. This file can also be downloaded from https://www.isc.org/bind-keys.\nWindows XP No Longer Supported As of BIND 9.11.2, Windows XP is no longer a supported platform for BIND, and Windows XP binaries are no longer available for download from ISC.\nSecurity Fixes (previously issued in patch releases) An error in TSIG handling could permit unauthorized zone transfers or zone updates. These flaws are disclosed in CVE-2017-3142 and CVE-2017-3143. [RT #45383] The BIND installer on Windows used an unquoted service path, which can enable privilege escalation. This flaw is disclosed in CVE-2017-3141. [RT #45229] With certain RPZ configurations, a response with TTL 0 could cause named to go into an infinite query loop. This flaw is disclosed in CVE-2017-3140. [RT #45181] Bug Fixes Fixed a bug that was introduced in an earlier development release which caused multi-packet AXFR and IXFR messages to fail validation if not all packets contained TSIG records; this caused interoperability problems with some other DNS implementations. [RT #45509] Reloading or reconfiguring named could fail on some platforms when LMDB was in use. 
[RT #45203] Due to some incorrectly deleted code, when BIND was built with LMDB, zones that were deleted via rndc delzone were removed from the running server but were not removed from the new zone database, so that deletion did not persist after a server restart. This has been corrected. [RT #45185] Thank You Thank you to everyone who assisted us in making this release possible. If you would like to contribute to ISC to assist us in continuing to make quality open source software, please visit our donations page at https://www.isc.org/donate/.\n","permalink":"/blogs/bind-9-11-2/","tags":["BIND 9.10","BIND 9.11","BIND 9.9","BIND","maintenance","release"],"title":"BIND 9.11.2, 9.10.6 \u0026 9.9.11 Maintenance Versions Posted"},{"categories":["BIND","DHCP","Kea"],"contents":"ISC\u0026rsquo;s open source, including BIND 9, ISC DHCP, and Kea, is sustained by software support contracts. Open source software is free for the user, but developing it does cost money. Our software systems are large and complex; the developers are doing a professional job like any other software development job. Most of the money to support these projects comes from revenues from software support. This totalled $4,165,726 in 2015 and $4,187,219 in 2016*. We also receive some funding in the form of grants or small donations, generally less than about $200K per year. ISC is non-profit: we spend this money to maintain our software and deliver the support services.\nOnly a very tiny percentage of our user base supports us financially. In 2016 we had 106 support subscribers. We don\u0026rsquo;t really know how many users we have, but most of our maintenance releases are downloaded from ISC between 10,000 - 30,000 times, and we know that many users get our software from other sites or in other packages. 
The organizations who pay ISC for software support are helping us to develop the open source for everyone, and many of them subscribe just in order to support open source.\nISC sells annual software support contracts to a wide array of organizations. The two pie charts below illustrate the approximate percentage of our overall support revenue that comes from each of the segments.\nService providers, including cable and mobile access providers as well as traditional carriers, provided most of our support in both 2015 and 2016. The next most significant customer segment is developers of commercial products that incorporate our open source, which we refer to as \u0026ldquo;OEMs.\u0026rdquo; ISC open source is incorporated into many commercial and open source products, including applications, embedded devices, and operating systems.\nThe revenue percentages are not indicative of the number of support customers in each segment. We provide different levels of support, ranging from Advance Security Notification only, to 7x24 support with SLAs, and we charge small and non-profit organizations less than larger commercial organizations and service providers. In 2016, for example, our 8 subscribing OEMs provided us with more support than the 13 ccTLDs, 10 educational organizations, and 8 consultants combined.\nWhere in the world does the ISC\u0026rsquo;s open source support come from? It is hard to say precisely where our support comes from geographically, since many large commercial organizations have operations in multiple countries. However, it is clear that most of our support comes from organizations headquartered in the US and Europe.\nWe believe we have a lot of open source users outside the US and Europe, based on the apparent location of people downloading software from our website. 
Not surprisingly, subscribers in the US and Europe are subsidizing the open source for users in other parts of the world.\n* These numbers might not exactly match those in ISC\u0026rsquo;s audited financial statements for 2015 and 2016. These are 12-month contracts for which we recognize revenue on a monthly basis as we deliver the services. This analysis was based on the 12-month value of the contracts we signed in each year, and was not adjusted to recognize the exact number of months of revenue realized in each calendar year.\n","permalink":"/blogs/who-pays-for-iscs-open-source/","tags":["finance","money","open source","revenue","supporters"],"title":"Who Pays for ISC’s Open Source?"},{"categories":["BIND","DNS"],"contents":"There are three reasons why you want a root server near you:\nPerformance (faster response time) Resilience (less affected by global infrastructure problems) Privacy (less opportunity for eavesdropping) Introduction In the early days of the Internet, most international traffic passed through the USA. Traffic between two nodes in Japan that didn’t use the same ISP was often routed through San Francisco. Traffic between two European countries was often routed either through New York or Amsterdam.\nThe creation and growth of local Internet exchanges — variously called IXs, IXPs, NAPs, or MAEs — allowed local traffic to remain local. If ISPs for both of those Japanese nodes were connected to an Internet exchange in Japan, then traffic could be routed there rather than through North America. It was a tremendous improvement. 
Yes, the Internet is global, and yes, it is good that any Internet node can communicate with any other Internet node, but if the traffic between them is routed through a local IX rather than a distant hub, the communication is faster, more reliable, and less vulnerable to outages caused by events far away.\nPerformance Most Internet transactions or connections begin by looking up something in the DNS, and many DNS lookups begin with a query to a root server (if there is no local cached copy). DNS data is cached by all of the lookup software (\u0026ldquo;resolvers\u0026rdquo;) but there are limits (both from common sense and from technology) to the length of time that data can or should be cached.\nSo unless there is a root server close to you, there is a good chance, the probability of which is impossible to figure, that an Internet transaction will begin by making a long-distance query to a faraway root server. This takes time. The protocols are very robust; if the first faraway root server queried does not return a result, the lookup software will search for another root server, probably somewhere else, keeping up this search until it finds one. That search-if-it-fails algorithm is very reliable, but it is not very fast. Under some circumstances that search might take several seconds.\nResilience When all is well on the global Internet, having a nearby root server provides a moderate improvement in response time and reliability. It is during one of the inevitable denial-of-service attacks that the advantage becomes vital.\nFrom time to time, there are hostile DoS attacks on parts of the Internet. Denial of Service. Some dark force that has captured or bought access to a \u0026ldquo;botnet\u0026rdquo; of compromised computers will unleash all of the computers at its disposal to send forged traffic to the victim systems. These attacks are usually magnified by leveraging off the failures of un-upgraded, un-patched personal computers or personal devices. 
The net result is that Internet traffic in the vicinity of the victims becomes hundreds or thousands or millions of times heavier than normal. The usually-adequate long-distance circuits become bogged down with attack traffic and are not useful for much else.\nIf there is no root server close to you, so that your root queries must travel over long-distance circuits to reach a server, and those long-distance circuits are so overloaded that your queries are lost, then your Internet will become useless. You might only need to communicate with a business that is within easy walking distance, but if your root DNS service requires sending queries to faraway places, then your resolver’s cached root data will eventually time out, and your communication will fail.\nFor the best protection against being collateral damage in a global attack, your community should have its own IX. Every IX should have at least one root server. That combination will permit Internet service to continue in your community regardless of what data storms are happening in the rest of the world. The various content distribution services are installing content servers in most locations, so if you have your own IX and your own root server and your own copy of most major content, you can continue most operations even when there is a major cable cut.\nPrivacy Queries made to root servers convey information about the names being looked up. Sometimes that information is best kept private. The farther a query has to travel, the greater the likelihood that someone is snooping. If the root server is local, then root DNS queries can only be snooped by people with local access. 
When a query travels a long way, you don’t really know who owns the stops along the way, and who might be listening.\n","permalink":"/blogs/why-you-want-a-root-server/","tags":["DNS","F-Root","resilience"],"title":"Why You Want a Root Server Nearby, Even If You Have to Host It Yourself"},{"categories":["DHCP","ISC DHCP"],"contents":"We have decided to move ISC DHCP to the Mozilla Public License (MPL 2.0).\nIn 2016 we re-licensed BIND and Kea under the Mozilla Public License 2.0. At the time we solicited public comment, and talked to many stakeholders about it. In the end, we didn\u0026rsquo;t see any negative impact on our users from that change. Now we plan to make the same shift from the ISC license to the MPL 2.0 license for ISC DHCP.\nThe reason for making the software license change is to encourage continued community contributions to the ISC DHCP project, particularly from those who benefit most from the free open source, vendors incorporating ISC DHCP into commercial products and services.\nAlthough we are aware that the MPL 2.0 license may not require the contributions we are hoping for, it seems to be an acceptable middle ground between the current ISC license and the GPL license.\nThis change will apply to ISC DHCP version 4.4.0 and any 4.4.x versions.\nWe will add the MPL 2.0 license terms as we update or change files, beginning with the 4.4.0 beta version. Older versions already published under the ISC license will remain under the ISC license. We will be highlighting the change in our release notes and in the release announcements. 
In addition, we will be updating our contributor guidelines so technical contributors are aware of how their contributions will be licensed.\nThose unsure whether or not the license change affects their use of ISC DHCP, or who wish to discuss how to comply with the license, may contact ISC at info@isc.org or use our contact form.\nWe encourage anyone making a product that incorporates ISC DHCP to subscribe for annual support from ISC - this is how we fund the open source maintenance. If you or your employer use ISC DHCP and you want to see the project continue as a strong and healthy system, please consider contributing, if you do not today. We welcome donations of any amount at our donations page.\nWe welcome comments from DHCP users, about this or anything else we do. Email Vicky Risk, Product Manager at vicky@isc.org or discuss on dhcp-users@lists.isc.org.\n","permalink":"/blogs/isc-dhcp-moves-to-mpl-2-0-license/","tags":["dhcpd","license","ISC DHCP"],"title":"ISC DHCP Moves to MPL 2.0 License"},{"categories":["BIND","DNS","Security"],"contents":"Over the past five years, we have taken, on average, 32 days to publicly disclose a BIND vulnerability, from the time we receive the first report. Typical steps from report to disclosure include:\nSet-up secure email link with reporter, request more details Reproduce in house (this can take a while, particularly if the reporter can't provide enough detail) Team meeting to get a consensus on CVSS score Allocate or request CVE# Determine affected software versions Develop, review, and confirm a fix; commit to multiple branches Draft and review CVE notice Create and test patched versions Notify support subscribers, root operators, OS packagers Disclosure: Update FTP server, web server, email user mailing lists, vulnerability matrix, Knowledgebase An average of 32 days to get all of that done is reasonable. 
Sometimes it takes longer, often because we are waiting to bundle several issues together, or to avoid announcing right before a holiday.\nWhat about the outliers buried in that average?\nIt is generally regarded as unresponsive to take more than 90 days from report of a vulnerability to publication. We have exceeded that only once in this period, in 2012, when it took us 98 days to disclose CVE-2012-5688, \u0026ldquo;BIND 9 servers using DNS64 can be crashed by a crafted query.\u0026rdquo; This report came from someone who was using heavily modified code, and it took a long time to find and identify the problem in our source.\nAt ISC, we take responsible vulnerability disclosure seriously. We follow a consistent, documented process which includes prioritizing, fixing, and disclosing security vulnerabilities promptly.\nReferences:\nReporting Security Vulnerabilities\nISC Software Defect and Security Vulnerability Disclosure Policy\nPublic disclosure notices may be found in our Knowledgebase, in the section headed \u0026ldquo;Security Advisories\u0026rdquo; under each ISC product.\nHere is the source data for the chart above.\n| Report Date | Description | CVSS V2 Score | CVE Number | Public Release Date | Elapsed Days |\n| 5/24/2012 | High TCP query load can trigger a memory leak | 5 | CVE-2012-3868 | 7/24/2012 | 61 |\n| 6/28/2012 | Heavy DNSSEC validation load can cause a \u0026ldquo;Bad Cache\u0026rdquo; assertion failure | 7.8 | CVE-2012-3817 | 7/24/2012 | 26 |\n| 6/1/2012 | Handling of zero length rdata can cause named to terminate unexpectedly | 8.5 | CVE-2012-1667 | 6/4/2012 | 3 |\n| 7/25/2012 | A specially crafted resource record could cause named to terminate | 7.8 | CVE-2012-4244 | 9/12/2012 | 49 |\n| 8/28/2012 | BIND 9 servers using DNS64 can be crashed by a crafted query | 7.8 | CVE-2012-5688 | 12/4/2012 | 98 |\n| 9/21/2012 | Specially crafted DNS data can cause a lockup in named | 7.8 | CVE-2012-5166 | 10/9/2012 | 18 |\n| 12/12/2012 | BIND unexpectedly closed when resolving with RPZ and DNS64 | 7.8 | CVE-2012-5869 | 1/24/2013 | 43 |\nAvg Elapsed Days 2012: 43\n| Report Date | Description | CVSS V2 Score | CVE Number | Public Release Date | Elapsed Days |\n| 2/21/2013 | A maliciously crafted regular expression can cause memory exhaustion in named | 7.8 | CVE-2013-2266 | 3/26/2013 | 33 |\n| 6/4/2013 | A recursive resolver can be crashed by a query for a malformed zone | 7.8 | CVE-2013-3919 | 6/4/2013 | 0 |\n| 7/15/2013 | rdata Denial Of Service Vulnerability | 7.8 | CVE-2013-4854 | 7/26/2013 | 11 |\n| 8/22/2013 | A Winsock API bug can cause a side-effect with BIND ACLs | 6.8 | CVE-2013-6230 | 11/6/2013 | 76 |\n| 12/18/2013 | A crafted query against an NSEC3-signed zone can crash BIND | 5.4 | CVE-2014-0591 | 1/13/2014 | 26 |\nAvg Elapsed Days 2013: 29\n| Report Date | Description | CVSS V2 Score | CVE Number | Public Release Date | Elapsed Days |\n| 5/2/2014 | prefetch crash | 7.8 | CVE-2014-3214 | 5/8/2014 | 6 |\n| 5/24/2014 | EDNS options crash | 7.8 | CVE-2014-3859 | 6/11/2014 | 18 |\n| 10/17/2014 | An infinite chain of delegations (in a maliciously crafted zone) can cause BIND to consume all resources while following the delegations | 7.8 | CVE-2014-8500 | 12/8/2014 | |\n| 11/1/2014 | Assertion failure when geolocation database not loaded | 5.4 | CVE-2014-8680 | 12/8/2014 | 37 |\nAvg Elapsed Days 2014: 28\n| Report Date | Description | CVSS V2 Score | CVE Number | Public Release Date | Elapsed Days |\n| 1/15/2015 | Crash (assertion failed) upon attempting to retrieve trust anchor | 5.4 | CVE-2015-1349 | 2/18/2015 | 34 |\n| 6/15/2015 | Crash in name.c | 7.8 | CVE-2015-4620 | 7/7/2015 | 22 |\n| 7/13/2015 | Failure to reset variable to NULL in tkey.c | 7.8 | CVE-2015-5477 | 7/28/2015 | 15 |\n| 8/1/2015 | Assert in parsing of DNSSEC keys with malformed input | 7.8 | CVE-2015-5722 | 9/2/2015 | 32 |\n| 8/8/2015 | Assertion failure parsing OPENPGPKEY wire data | 7.1 | CVE-2015-5986 | 9/2/2015 | 25 |\n| 10/19/2015 | REQUIRE(rdataset-\u0026gt;rdclass == db-\u0026gt;rdclass) failed | 7.1 | CVE-2015-8000 | 12/15/2015 | 57 |\n| 12/1/2015 | INSIST assertion failure in resolver.c:fctx_query() | 5.4 | CVE-2015-8461 | 12/15/2015 | 14 |\n| 12/29/2015 | INSIST in apl_42.c when printing record | 6.8 | CVE-2015-8704 | 1/19/2016 | 21 |\n| 12/29/2015 | REQUIRE when printing out OPT record | 5.4 | CVE-2015-8705 | 1/19/2016 | 21 |\nAvg Elapsed Days 2015: 27\n| Report Date | Description | CVSS V2 Score | CVE Number | Public Release Date | Elapsed Days |\n| 1/22/2016 | A REQUIRE assertion failure can be deliberately triggered in servers performing NXDOMAIN redirection | 5.4 | CVE-2016-1284 | 2/3/2016 | 12 |\n| 2/9/2016 | REQUIRE assertion in named during rndc pre-authentication | 7.8 | CVE-2016-1285 | 3/9/2016 | 29 |\n| 2/19/2016 | Crash with an INSIST when the response contains an RRSIG RR covering DNAME where the owner name of the RRSIG RR is not a subdomain of the query, for any query (of any type) | 7.8 | CVE-2016-1286 | 3/9/2016 | 19 |\n| 2/26/2016 | named will crash with an INSIST when the response contains an OPT metarecord with multiple COOKIE options | 7.8 | CVE-2016-2088 | 3/9/2016 | 12 |\n| 6/21/2016 | Potential DoS against lwresd via specially crafted query | 5.4 | CVE-2016-2775 | 7/18/2016 | 27 |\n| 8/31/2016 | An error in a buffer length check when rendering reply messages can cause an assertion failure in buffer.c when named is sent a specially constructed packet | 7.8 | CVE-2016-2776 | 9/21/2016 | 21 |\n| 10/1/2016 | A specially crafted packet could crash older (unsupported) versions of BIND | 7.8 | CVE-2016-2848 | 10/20/2016 | 19 |\n| 10/20/2016 | Badly constructed DNAME zone data on an authoritative server might cause a resolver to crash | 7.5 | CVE-2016-8864 | 11/1/2016 | 12 |\n| 10/31/2016 | A malformed response to an ANY query can cause an assertion failure during recursion | 7.5 | CVE-2016-9131 | 1/11/2017 | 72 |\n| 11/2/2016 | An error handling a query response containing inconsistent DNSSEC information could cause an assertion failure in BIND | 7.5 | CVE-2016-9147 | 1/11/2017 | 70 |\n| 11/14/2016 | An unusually-formed DS record response could cause an assertion failure | 7.5 | CVE-2016-9444 | 1/11/2017 | 58 |\n| 12/7/2016 | An error handling certain queries using the nxdomain-redirect feature could cause a REQUIRE assertion failure in db.c | 7.5 | CVE-2016-9778 | 1/11/2017 | 35 |\nAvg Elapsed Days 2016: 32\n| Report Date | Description | CVSS V2 Score | CVE Number | Public Release Date | Elapsed Days |\n| 1/17/2017 | An unhandled interaction between RPZ and DNS64 can cause a segmentation fault when named attempts to get the record count of a disassociated record set | 7.5 | CVE-2017-3135 | 2/8/2017 | 22 |\n| 2/9/2017 | DNS64 wildcard handling can break DNSSEC | 5.9 | CVE-2017-3136 | 4/12/2017 | 62 |\n| 2/21/2017 | A response packet can cause a resolver to terminate when processing an answer containing a CNAME or DNAME | 7.5 | CVE-2017-3137 | 4/12/2017 | 50 |\n| 3/20/2017 | named exits with a REQUIRE assertion failure if it receives a null command string on its control channel | 6.5 | CVE-2017-3138 | 4/12/2017 | 23 |\n| 5/4/2017 | RPZ looping FORMERR | 3.7 | CVE-2017-3140 | 6/14/2017 | 41 |\n| 5/14/2017 | Windows Service path not quoted | 7.2 | CVE-2017-3141 | 6/14/2017 | 31 |\nAvg Elapsed Days 2017: 38\nThis post was updated after initial publication, to correct an error in the publication date for CVE-2017-3136. 
Publication of CVE-2017-3136 was originally scheduled for 8 March and was delayed when we received the two later vulnerability reports; all three were subsequently released together on April 12, 2017.\n","permalink":"/blogs/days-disclosure-bind9-cves/","tags":["BIND","CVE","vulnerability","security"],"title":"BIND 9 CVEs – Days from Report to Disclosure"},{"categories":["BIND","ISC DHCP"],"contents":"Update: to access the bug database, go to bugs.isc.org, click on the Guest login, and select \u0026ldquo;bug-list\u0026rdquo; for either DHCP or BIND 9.\nWe are excited to announce that, beginning July 7th, we will finally be enabling read-only Guest access to our BIND 9 and ISC DHCP bug database. I know what you are thinking.\nEvery other open source project has had an open bug database for years. What took ISC so long?\nFirst problem: Software support for Guests First of all, our issue tracker software did not have a \u0026ldquo;Guest\u0026rdquo; access feature. We use Request Tracker from Best Practical. It is open source and otherwise meets our needs perfectly. We did spend a couple of years talking about migrating to JIRA, but that is a big project and we didn\u0026rsquo;t see the payback. Recently we noticed that the CPAN project was using RT and they had some kind of Guest access, so we asked Best Practical to port the module for Guest access to the software version we are using. They were great to work with, and recently delivered the patch. The feature enables us to create a new \u0026ldquo;public access\u0026rdquo; ticket queue which is viewable by anonymous guest users.\nNext problem: Safe handling for possible vulnerabilities We do not have any verified, undisclosed security vulnerabilities in our issue tracker.\nHowever, it is possible that someone could mine old crash reports, find something that was never reproduced, or otherwise extrapolate from a bug report and find a vulnerability we missed. 
We put a LOT of effort into prompt and responsible disclosure of vulnerabilities and we certainly don\u0026rsquo;t want to inadvertently publish something that could be used against our users. This is why we feel we need to review any previously-reported, unresolved issues before publishing them.\nThe hardest problem: Expectation of privacy We are also concerned about privacy. Some of our staff have told people posting on our public lists that they could email to our issue tracker if they needed a confidential exchange. Many people have sent in configurations, crash dumps, or explicit information about their network that they understood at the time would not be published. We have to respect past commitments we or former ISC employees may have made.\nWe don\u0026rsquo;t have time to read through our massive, dusty archives It is much easier to have an open bug database if you start out that way. At this point, ISC has accumulated 17 years\u0026rsquo; worth of embarrassing comments, ToDo list items, unverified and abandoned bug reports, and possible user confidential information in our issue tracker. For example, we have over 7,000 resolved BIND issues, over 800 rejected BIND issues, and several hundred open issues, most of which were created by ISC employees. This is simply much more than we can possibly review or curate at this point.\nWe don\u0026rsquo;t have the time to review all these existing issues for publication, but we have decided we aren\u0026rsquo;t letting that get in the way of opening the bug database.\nNew, externally submitted issues will be publicly viewable by default The solution we have decided upon is to open up the issue tracker for new public reports, by default.\nIf you submit a bug to bind9-bugs@isc.org or dhcp-bugs@isc.org, or via our web bug report form after July 7th, we will triage it and put it into the open, viewable bugs queue - unless you specifically ask us not to, or it looks like it could be a security vulnerability. 
In the latter case, we will keep it in a non-public queue. Newly submitted suggestions sent to bind-suggest or dhcp-suggest@isc.org will go straight into a publicly-viewable queue without triage.\nNote that when you submit an issue your email address and any attachments will also be visible. This is why we are announcing this change early - so that we have plenty of time to get the word out to submitters.\nISC internally-created issues ISC employees create most of the issues in our issue tracker. Our development team opens issues for feature development (or any code commit), and to keep track of ideas for future work. Issues can also be opened automatically (e.g. by Coverity, which is scanning for possible regressions), or by our technical support team, reporting a problem on behalf of a support customer. We will obviously not be publishing any specific data provided by ISC support customers, but otherwise, we will try to be open about as many of our own issues as possible.\nIn particular, we want to enable users to look up the issues resolved in a recent release, and read more detail on the bugs that were fixed. We also want users to be able to search, using their current and former email addresses, to retrieve old issues they submitted, to find out what happened to them. We are hopeful that people interested in contributing patches will consider attacking some of the open issues. Finally, we hope that some users will add their comments (via the email gateway) to issues they find in the open issue tracker. Next Steps We are still working on installing and configuring the new software, and re-testing it, to get it ready for July 7th. 
We are looking forward to sharing this information with our users and technical contributors.\nWe don\u0026rsquo;t want to discourage anyone from sending in bug reports, so if you have any concerns about your privacy, just mention in the body of your report, whether you submit it via email or the web, that you prefer that we keep your report confidential, and we will. Every bug report is read by one of our engineers and triaged, so we can simply put your issue into the non-public queue.\nWe have had to disable comments on our website and blogs, due to overwhelming abuse, but we invite your feedback on our bind-users and dhcp-users mailing lists, or directly to info@isc.org.\n","permalink":"/blogs/bind-and-isc-dhcp-bug-db-opening-for-guest-users/","tags":["BIND","bugs","community","ISC DHCP","issues"],"title":"BIND and ISC DHCP Bug DB Opening for Guest Users"},{"categories":["BIND","DNS"],"contents":"One of the major challenges with logging network traffic is that it is very disk I/O intensive. It can also require a lot of storage. The storage requirements often lead to the use of file compression algorithms such as gzip to reduce the amount of disk space needed. For DNS traffic this can typically result in an 80% reduction in file size.\nThere are two commonly implemented ways of compressing pcap files. The simplest is to have the capture application write its output to \u0026ldquo;stdout\u0026rdquo;, and then pipe that output into the \u0026ldquo;stdin\u0026rdquo; of a compression program, e.g.:\ntcpdump -w - | ( gzip -c \u0026gt; output.pcap.gz \u0026amp; ) The other is to just have the files written out in their normal format, and then use a post-processing script to find (completed) pcap files and compress them.\nThe pipe method however is incompatible with tcpdump\u0026rsquo;s (and dnscap\u0026rsquo;s) automatic file rotation feature - there\u0026rsquo;s no way to close the current file and restart the compression when a new file is opened. 
If you are performing continuous packet capture then file rotation is essential, so this method is impractical.\nThe disadvantage of using a post-processor, though, is that it further dramatically increases the disk I/O load. Every packet ends up being written to disk in uncompressed form, and then later the post-processor has to read back the contents of that file, compress the data, and output a new file, most likely at the same time as yet more uncompressed data is being saved to the next file. If the packet capture is being performed directly on a network server this additional I/O load can adversely affect the operation of the server.\nA solution is required, therefore, that supports file rotation and on-the-fly compression such that uncompressed data is never written to disk. As we were planning to deploy DNS-OARC\u0026rsquo;s dnscap to support monitoring of the forthcoming root zone DNSSEC key roll I decided to see if I could build this functionality directly into dnscap.\ndnscap uses libpcap to read and write data, and specifically the pcap_dump_open(pcap_t *pcap, const char *path) function to save files. However libpcap also has a function pcap_dump_fopen(pcap_t *pcap, FILE *fp) which allows the caller to pass a handle to an already opened file to libpcap instead of having libpcap open the file for itself. All libpcap I/O uses stdio internally.\nA little known (albeit non-standard) feature of modern stdio implementations is the ability to create a file handle that uses user-supplied read, write, seek and close functions instead of using the standard POSIX functions that work on a UNIX file descriptor. On Linux systems glibc provides fopencookie() and BSD-derived systems have funopen(). 
An excerpt from the manual page for the latter is shown below:\nSYNOPSIS\n#include \u0026lt;stdio.h\u0026gt;\nFILE *funopen(const void *cookie,\n              int (*readfn)(void *, char *, int),\n              int (*writefn)(void *, const char *, int),\n              fpos_t (*seekfn)(void *, fpos_t, int),\n              int (*closefn)(void *));\nDESCRIPTION\nThe funopen() function associates a stream with up to four ``I/O functions''. Either readfn or writefn must be specified; the others can be given as an appropriately-typed NULL pointer. These I/O functions will be used to read, write, seek and close the new stream.\n...\nThe calling conventions of readfn, writefn, seekfn and closefn must match those, respectively, of read(2), write(2), lseek(2), and close(2) with the single exception that they are passed the cookie argument specified to funopen() in place of the traditional file descriptor argument.\nBy happy coincidence (or more likely by design) the standard gzip compression library contains functions that almost perfectly match these requirements. 
With just a small amount of additional code you can create a FILE * handle that acts exactly like a normal file handle to the caller, but that transparently compresses the output data before it\u0026rsquo;s saved to disk.\nWithin dnscap the call to pcap_dump_open() can be replaced with code like this (NB: error handling code omitted for brevity and showing only the BSD funopen method):\nstatic int\ngzip_cookie_write(void *cookie, const char *buf, int size)\n{\n    return gzwrite((gzFile)cookie, (voidpc)buf, (unsigned)size);\n}\n\nstatic int\ngzip_cookie_close(void *cookie)\n{\n    return gzclose((gzFile)cookie);\n}\n\npcap_dumper_t *\ndump_open(pcap_t *pcap, const char *path, int want_gzip)\n{\n    if (want_gzip) {\n        gzFile z = gzopen(path, \u0026#34;w\u0026#34;);\n        FILE *fp = funopen(z, NULL, gzip_cookie_write, NULL, gzip_cookie_close);\n        return pcap_dump_fopen(pcap, fp);\n    } else {\n        return pcap_dump_open(pcap, path);\n    }\n}\nThe cookie argument is an opaque pointer that is typically a pointer to a structure containing per-context data relating to the file being handled. In this case the gzip library\u0026rsquo;s gzFile type is used directly as that cookie.\nPatches to dnscap that automatically generate compressed output if the save suffix ends with .gz have been accepted by DNS-OARC and are in the 1.5.0 release.\nIn our tests this was found to be very much more efficient than using a post-processor, with vastly decreased I/O load and also a beneficial drop in CPU usage.\n","permalink":"/blogs/efficient-compression-of-packet-captures/","tags":["dnscap","fopencookie","funopen","libpcap","stdio","tcpdump"],"title":"I/O Efficient GZip Compression of Packet Captures"},{"categories":["Kea","DHCP"],"contents":"Kea 1.2.0 was released a couple of days ago. You may have noticed that there are two new premium features available. We have received some questions.\nIs Kea becoming commercial? No. Kea is and will remain open source. Rumors that Kea is becoming a commercial product are simply not true. 
Our first commercial add-on - the Kea Forensic Logging library - has been available since 1.1. We simply started adding more commercial add-ons to it.\nKea has been in development since 2011. While we did enjoy one sponsorship grant and several custom development contracts and have recently signed a few support contracts, the vast majority of Kea development was internally funded by ISC. That means that for every single year since Kea\u0026rsquo;s inception, it was generating a loss for ISC. Sadly, ISC is not a wealthy company and this kind of funding is unsustainable in the long term. Therefore, we started looking at ways to improve the financial standing of Kea. One key point here is that the goal here is to break even, not make a profit from Kea. ISC is a non-profit entity and we can\u0026rsquo;t turn anything into a significant source of profit.\nWhy do you need money for an open source project anyway? We like to think that the products we deliver are commercial-quality. Kea is a project that is run by a small team of skilled engineers (two engineers working full-time with two others contributing occasionally). We also have one person doing independent validation (he develops tests, runs them, and takes care of our server farm) and there\u0026rsquo;s some management and planning involved (we need to decide what are the goals of each milestone, which tickets go in, whether we\u0026rsquo;re on track or lagging behind, etc.).\nThere are other costs involved as well. We run an extensive build and test farm with around 20 machines, a mix of metal and VMs. Those systems do builds, run unit-tests, system tests, performance measurements, static code analysis, perform memory leak validation, sanity check the build release process, and many other things. They need maintenance, consume power, and need lab space.\nKea engineers are active in the community and attend conferences. This also costs money. 
We are involved in the IETF standardization process (see this list of published RFCs; some of them are authored by Kea engineers); therefore, you can meet us at IETF meetings. We also sometimes show up at other meetings, like RIPE or regional operator meetings (e.g. PLNOG or UKNOF), where we can meet users, get requirements, and talk about Kea.\nWhat features will be premium features? The question was raised as to what sort of functionalities are going to end up as premium features. This is always determined on a case-by-case basis, but the major determining factor is whether the feature is used by small deployments (will go into open source) or by big deployments (may possibly go into premium). Let me give you a specific example. Our recent addition, the Host Commands library, adds extra commands to manipulate (retrieve, add, and delete) host reservations. You can already do that in the open source in several ways: you can insert the record into the database, or you can update your configuration file and tell Kea to reload it. The reconfiguration usually takes fractions of a second, but may take longer if your configuration is extensive. Smaller deployments will most likely find that acceptable. However, if you happen to have many thousands of users and you constantly add or remove devices on your network, you will find host commands useful.\nWhat about contributions? The code available as premium is 100% developed at ISC. It does not and will not include any contributed code. If we accept a patch, it is offered to ISC under a license (typically it\u0026rsquo;s MPL2, the same as Kea, but on one occasion we received code under the Apache license). Whatever the license is, ISC received the code under that license and we can\u0026rsquo;t and won\u0026rsquo;t republish it under a different license.\nWhat about donations? We occasionally receive donations and we are very grateful for them. Unfortunately, they are not a realistic way of supporting Kea for two reasons. 
First, they\u0026rsquo;re completely unpredictable (as opposed to our electricity bills coming in regularly and our engineers\u0026rsquo; families needing to eat and pay their bills). Second, they could perhaps cover part of the electricity bill, but that\u0026rsquo;s about it.\nIsn\u0026rsquo;t there any other way? There is, at least in theory. We could have all of the code as open source and ask people who run networks using Kea to share some of their profits by buying a Kea support contract. We tried that for almost 2 years and it wasn\u0026rsquo;t very successful. So we are trying to make the offering more attractive by adding more benefits to the support package. Once the Kea project gets into a better financial situation, we will gradually push more features into open source and fewer into premium.\nWhat can I do to help? The answer is different depending on your situation. Some engineers using Kea like it and they decide to donate some amount of personal money. That\u0026rsquo;s certainly a very nice gesture and we\u0026rsquo;re very grateful, but that is unlikely to help Kea survive financially. If you happen to work for a company that runs Kea and you have many customers, consider signing a support contract for Kea. Perhaps you\u0026rsquo;re an engineer who can\u0026rsquo;t make such decisions. In that case you can help by convincing your boss that getting a support contract is the right thing to do; you will get more out of the software and you will help develop Kea further. You may want to take a look at the Kea page of our website for more details. Also, feel free to contact us to get more information. You can also consider a custom development contract. See the next question for details.\nIf you\u0026rsquo;re a small company or perhaps you run a university campus without any significant funding, there are other things you could do. 
First, if your deployment is small, in the range of low thousands of devices, you are unlikely to absolutely require any premium features. If you deployed Kea and it works well for you, you can help make it more popular by coming forward and sharing your successful deployment story.\nBut I want to make sure that feature X goes into open source. Is there anything I can do? Certainly. In the past we have completed several custom development contracts. We were approached by a company that wanted a feature; after a brief discussion we came up with a contract that contained written technical requirements and a note that the software developed would be published under an open source license. That\u0026rsquo;s how Kea got IPv6 prefix delegation support, for example.\nHopefully these answers have cleared up any concerns. If you have any further questions, please feel free to post a message on kea-users or feel free to chat with any ISC person if you happen to find us at a conference.\nTomek Mrugalski\n","permalink":"/blogs/funding-kea/","tags":["funding","open source","sponsors"],"title":"Funding Kea"},{"categories":["Kea","DHCP"],"contents":"Open source DHCPv4/v6 server New in Kea version 1.2:\nREST API enables remote management http interface (https is coming with Kea 1.3) JSON command syntax example Python client included, or write your own We are grateful to Mozilla for the 2016 MOSS award that helped to sponsor our work on remote management for Kea. Kea 1.2 includes patches contributed to our Github repo by Andrei Pavel and Cristian Secareanu of Qualitance, zeitounator, parisioa, Patrik Lundin, vlegout, sebschrader, and MrMarvin. 
Thank you for these patches, and thank you to everyone who tested and submitted reports.\nIncluded with every Kea Support Subscription\nDownload Kea\nFind out more about Kea!\n","permalink":"/blogs/kea-1-2-adds-rest-api/","tags":["DHCP","Kea","Mozilla","REST","API"],"title":"Kea 1.2 Adds REST API"},{"categories":["BIND","DNS"],"contents":"Maintenance - Plus\nOne hundred and seventy-eight tickets were resolved with 9.9.10, 9.10.5, 9.10.5-S and 9.11.1.\n35 of these were minor features or feature changes and 13 were test items.\nWe incorporated 15 submitted patches, contributed by:\nHannes Frederic Sowa - Use IP_PMTUDISC_OMIT if available Thomas Anderson - Fixing a build failure problem LaMont Jones - \u0026ldquo;This patch has been kicking around in the Debian tree for quite some time.\u0026rdquo; Tony Finch - No fewer than NINE changes from Tony! Stacey Jonathan Marshall - Solaris Software (Allow krb5-config to be used when configuring gssapi) Xose Vazquez Perez - Making sure we were aware of the IPv6 address for G.root Petr Spacek - From when he was still at RedHat (we were sorry to see him leave!) The Subscriber Preview edition, 9.10.5-S, included two very significant changes:\nEDNS Client-Subnet Identifier (ECS) for caching resolver operations\nOur implementation uses a white-list to identify servers to send client-subnet information to. Naturally we also respect the client signalling requesting privacy. Newly re-written Response Policy Zones (RPZ) engine\nThe re-implemented RPZ is 100% backwards compatible with the older RPZ implementation (according to our tests; please let us know if you find any differently!). We have eliminated the blocking that could occur when processing large RPZ updates, significantly improving usability for heavily-loaded systems. This is the first installment of our 2017 BIND refactoring effort. There is more to come in BIND 9.12. 
","permalink":"/blogs/bind-april-2017/","tags":["BIND","maintenance","release"],"title":"BIND 9 Updates, April 2017"},{"categories":["ISC","Company News","Annual Report"],"contents":"\u0026ldquo;ISC is dedicated to developing software and offering services in support of the Internet infrastructure.\u0026rdquo;\nOnce a year, we attempt to catalog what we did the prior year towards supporting the infrastructure. We do have a small team who are very busy keeping F-Root going, as well as the hosting services we still provide for some non-profits, but they are too busy to give me a list of accomplishments, so this blog focuses on our software-related contributions.\n8 New Internet Standards We are huge believers in standards. If there are three legs to the Internet, one is Open Standards, one is Open Source, and the third is, I don\u0026rsquo;t know, maybe relationships? It used to be relationships, today it might be bandwidth, subscribers or \u0026ldquo;eyeballs.\u0026rdquo;\nISC developers participate vigorously in the creation and evolution of the Internet standards. In 2016, eight of the drafts ISC staff co-authored reached RFC status. Of course, we reviewed and contributed to many other drafts that we did not author. We have already implemented RFC 7873, and most of RFC 7766 in BIND 9. We published the design for Catalog Zones as implemented in BIND 9.11 as an IETF draft, in the hope that other vendors will implement for interoperability.\nRFC 7766 DNS Transport over TCP - Implementation Requirements, J. Dickinson, S. Dickinson, R. Bellis, A. Mankin, D. Wessels\nRFC 7793 Adding 100.64.0.0/10 Prefixes to the IPv4 Locally-Served DNS Zones Registry, M. Andrews\nRFC 7819 Privacy Considerations for DHCP, S. Jiang, S. Krishnan, T. Mrugalski\nRFC 7824 Privacy Considerations for DHCPv6, S. Jiang, S. Krishnan, T. Mrugalski\nRFC 7828 The edns-tcp-keepalive EDNS0 Option, P. Wouters, J. Abley, S. Dickinson, R. Bellis\nRFC 7844 Anonymity Profiles for DHCP Clients, C. 
Huitema, S. Krishnan, T. Mrugalski\nRFC 7873 Domain Name System (DNS) Cookies, D. Eastlake 3rd, M. Andrews\nRFC 7969 Customizing DHCP Configuration on the Basis of Network Topology, T. Lemon, T. Mrugalski\n4,500 people visited our online EDNS compatibility checker at ednscomp.isc.org to test their servers in 2016. This is a grassroots effort to stimulate improved Internet compatibility for advanced DNS features such as DNSSEC and DNS cookies that appears to be modestly useful.\n16 Conference Presentations ISC staff presented at NANOG, APRICOT, RIPE, PLNOG, ICANN, and a DNS BOF in Japan in 2016, plus we put on four webinars. I realize not all of you can attend conferences, but the proceedings are generally published and this is what we do instead of \u0026ldquo;marketing.\u0026rdquo;\nSlides for all of these are posted on the ISC website under Community -\u0026gt; Presentations.\nIn addition to the presentations we have given, we make other contributions to industry organizations. Tomek Mrugalski is co-chair of the IETF DHC working group, Ray Bellis is currently Program Committee chair for DNS-OARC, Eddy Winstead has given several NANOG On-the-Road tutorials in 2016, Cathy Almond is vice-chair of the UKNOF Programme Committee, Fred Baker and Warren Kumari represent ISC on the Root Server System Advisory Committee (RSSAC), Ray Bellis and Mukund Sivaraman are members of the RSSAC Caucus, and Jeff Osborn is on the ISOC Advisory Committee.\n24 BIND Releases Issued BIND continues to be an important open source resource for the Internet. The most recent estimate we can find, from one of Geoff Huston\u0026rsquo;s experiments in October 2015, showed that 55% of all DNS queries on the Internet originate from BIND resolvers.\nThe majority of our development and support efforts are spent on BIND. 
The team continues to crank out a number of maintenance and security releases every year.\nWe issued maintenance versions on the 9.9 and 9.10 branches, plus our subscriber edition, based on 9.9. We published 10 security vulnerabilities and issued 19 patch releases. A number of the vulnerabilities were found in house, through our fuzz testing program. We have transitioned to using the new version 3 of the CVSS scoring system because the rest of the industry was migrating, but we don\u0026rsquo;t see significant improvements in the accuracy of scoring for DNS vulnerabilities. The CVSS system is the only general standard for scoring vulnerabilities, but it can result in high-severity scores for BIND vulnerabilities despite what seem to us fairly unusual circumstances. Also, advances in fuzz testing have increased the rate at which we are seeing vulnerability reports, so we need a better system to identify those vulnerabilities that are real threats to your DNS, versus those that are extreme corner cases.\nIn 2016 we held all the vulnerabilities discovered between late November and the end of the year for announcement in 2017, to give operators a break from applying BIND patches. We would have broken our silence for a zero-day vulnerability, but it seems to have worked out well.\nAlthough we have developed what we think is a good process over the years, handling security vulnerabilities consumes a lot of our resources, interrupting our work and delaying releases. We are also very much aware that patching vulnerabilities is a burden for our users. We are considering further changes to our process going forward, to reduce the volume of security releases, while protecting the installed base.\nOne of the biggest changes in 2016 was the transition to the MPL 2.0 open source license for BIND 9.
We asked for and received a number of comments about the choice of license, and eventually made the change with the 9.11.0 release.\nOver a dozen new features in the new 9.11 branch The focus for 9.11 was on provisioning improvements for authoritative operators. Major new features in BIND 9.11 include Catalog zones, dnstap support, a DNSSEC key management utility, negative trust anchors for DNSSEC, support for standardized DNS cookies, new RNDC commands for viewing and updating zones, an embedded database option for dynamic zones, nxdomain-redirect, two features facilitating migration to IPv6, a message-compression option, transfer-message-size, and many smaller feature changes. We worked with RedHat on a significant patch, the Dyndb interface, which integrates BIND with the FreeIPA software system.\nBIND 9.12 will be a much shorter cycle BIND 9.10 was released in May of 2014. The next major version, 9.11, wasn\u0026rsquo;t ready for final release until October of 2016. The time between versions was so long (29 months) that some contributed patches became obsolete and we faced additional integration problems. The delay was frustrating for users who were waiting for new features that had been in planning for several years. We have decided to significantly shorten our new branch cycles for the next major BIND version in order to have a \u0026ldquo;fresher\u0026rdquo; major version. We expect to release BIND 9.12 in December, 2017, 14 months after 9.11. This could have an impact on our EOL planning that we haven\u0026rsquo;t fully realized yet, because more frequent releases + refactoring initiative = more versions with more variance between versions =\u0026gt; higher maintenance burden.\nBIND Testing - Added 2 dozen new complex system test environments We transitioned our on-going continuous integration testing for BIND 9 from Robie to the newer Jenkins system. This has enabled us to add a number of additional operating systems and configurations to our regular testing.
We now test on Debian 7.7, FreeBSD 10.2, FreeBSD 11, OS X 10.11, OpenBSD 5.7, Solaris 10 (Sparc), Ubuntu 13.10, Ubuntu 16.04, and Windows Server 2012. Adding support for Windows was a significant effort, spurred by the observation that we have a large number of downloads of the free Windows executables from our website. We added about two dozen new complex system test environments (for example, with and without threads, openssl, dnstap, libtool, and \u0026ldquo;enable-developer\u0026rdquo;, which is the kitchen-sink mode) to our automated build testing. We continue with internal fuzz testing, still using the AFL tool. This has been a very helpful and fruitful addition to our regular quality process for BIND. Towards the end of 2016, we kicked off a BIND refactoring project, aimed at improving maintainability and quality by reducing code complexity. 2 New iOS Apps We released a Dig application on the Apple iTunes store for iPhone and iPad. This is a port of the dig utility from BIND, intended as a convenience for experienced users. As a companion to this, in 2017 we have added a DNS Checker utility, which checks resolver support for interoperability, including EDNS support.\nISC DHCP Despite the fact that DHCP is a well-established protocol, there continue to be new issues, new feature requests, and new products incorporating ISC DHCP. Cloud architectures drive some of these changes. At the Open Compute networking day at Facebook we learned that ISC DHCP is a fundamental component of Canonical\u0026rsquo;s new \u0026ldquo;Metal As A Service\u0026rdquo; tool, aimed at bootstrapping bare metal into application containers (maas.io).\nWe issued two DHCP maintenance releases in 2016, DHCP 4.3.4 and 4.1-ESV-R13. Version 4.3.4 included numerous submitted patches, including a cluster of patches for the contributed LDAP back end, and added experimental support for DHCPv4 over DHCPv6.
After releasing 4.3.4 we started working on a 4.3.5 and 4.4.0.\nWe continue to support the 4.1 branch, even though it is past its announced EOL date, in order to keep options open for users. ISC DHCP 4.1 versions are also much smaller footprint than 4.3 versions, so keeping 4.1 around is useful for embedded applications.\nWe published one security vulnerability in ISC DHCP in 2016. An attacker could abuse the remote management interface, OMAPI, exhausting resources available for TCP connections.\nTowards the end of 2016, Thomas Markwalder took over as the lead maintainer for ISC DHCP.\nKea After releasing Kea 1.0 in December of 2015, we continued active development on Kea, releasing Kea 1.1 in the fall of 2016.\nThe major feature in Kea 1.1 was a more complex client classification system that now supports \u0026ldquo;test expressions\u0026rdquo; to determine what class a client belongs in. Kea client classification now has 3 levels, compared to the 6-7 levels of hierarchy in ISC DHCP. Kea 1.1 expanded support for retrieving host reservations from a database backend. We also added parameter passing for communicating with Kea \u0026ldquo;Hooks\u0026rdquo; extensions.\nWe have seen quite a bit of user interest in the overall opportunity of maintaining DHCP configuration in a shared database, and since we don\u0026rsquo;t plan to add DHCP failover to Kea if we can help it, we hired a consultant to help us test Kea in a high-availability configuration, using a MySQL database cluster backend for \u0026ldquo;failover.\u0026rdquo; Results of this test showed us that we have several opportunities for improving both Kea\u0026rsquo;s integration with the database, and our documentation and ease of use for deployment with the database backend.\nKea 1.1 included experimental support for DHCPv4-over-DHCPv6 - RFC7341, as well as preliminary support for another backend, Cassandra. The Cassandra support was contributed by a contractor working for Deutsche Telekom. 
We are thrilled to have DT using Kea in production and funding this external effort on Kea. Cassandra is quite different from our MySQL or the contributed PostgreSQL backend, and requires quite a bit of incremental development, testing and documentation.\nAt the Berlin IETF in July, a team including core Kea developers Tomek and Marcin worked with engineers from the Sysrepo project and Deutsche Telekom on experimental Yang support. Read Tomek\u0026rsquo;s blog post for more information.\n2016 saw a substantial uptick in the operating system packaging for Kea. There are now packages available for RedHat, CentOS, Ubuntu, Debian, Fedora, Arch Linux, FreeBSD, OpenBSD, and NetBSD. We also see a steady stream of messages on the Kea-users mailing list from new adopters asking questions and requesting additional features. We are hopeful that 2017 will be the year in which we finally see substantial adoption of Kea.\nKea is our most OPEN open source program at ISC:\nKea welcomes contributions via Github. Kea uses an open issue database that anyone can review at kea.isc.org, which includes our requirements and roadmap. The Kea continuous integration testing is done using Jenkins automation. The current test status is always visible at https://jenkins.isc.org. Despite the efforts we are making to engage with the user community, we are still struggling with the financial support model for DHCP open source. 2016 was disappointing as far as getting Kea support subscribers. We created a commercial hook application to help support the project, the forensic logging application, but so far, have not figured out how to effectively market it. 
However, late in 2016 we received a $100,000 award from the Mozilla Open Source Software program, which will fund work on a remote management interface for Kea in 2017.\nSummary Statistics\nSoftware | Software Releases | Net LOC added in 2016* | Issues** Opened in 2016 | Issues Resolved in 2016 | Security Vulnerabilities\nBIND 9 | 24 | 38,677 | 551 | 486 | 10 (plus 1 only in subscriber-only version)\nISC DHCP | 2 | 3,582 | 146 | 133 | 1\nKea | 1 | 27,721*** | 253 | 143 | 1\n* Additions net of deletions, January 2016 - January 2017 (measured via OpenHub)\n** Issues include defects, feature requests, and project tasks, so this is more of an overall activity index than a quality index.\n*** Kea stats are November 2015 - November 2016\n","permalink":"/blogs/2016-accomplishments-another-year-of-open-source-networking-software/","tags":["2016","annual report","open source","results","transparency"],"title":"2016 Accomplishments – Another Year of Open Source Networking Software"},{"categories":["BIND","DNS"],"contents":"We have just released a new utility in the Apple app store. It tests recursive DNS servers for conformance against current DNS protocol specifications, especially with regard to EDNS(0) as specified in RFC 6891.\nLike our other utility on the app store, Dig, this is a free download, and we do not provide formal support for it. Ray Bellis, ISC\u0026rsquo;s DNS Researcher, is the author.\n","permalink":"/blogs/dns-checker/","tags":["BIND","DNS","utility"],"title":"DNS Checker"},{"categories":["BIND","DNS","Security"],"contents":"The first release of BIND 9 was in September 2000. In the intervening 16 years, we have issued 225 more releases, give or take a few. We have continuously added new features and implemented new RFCs.\nBIND 9 is a big project: at last count there were 691,554 lines of code* in BIND. That is 3 times the size of PowerDNS, 5 times the size of the Unbound resolver, and 6 times the size of the Knot authoritative server.
According to the Cocomo model, the BIND 9 codebase is estimated to represent 138 person-years worth of software development effort.\nBIND 9 is complex As we have added to the original BIND 9 over the years, the code has gotten increasingly complex. This complexity has made it difficult and error-prone to modify. Since we cannot test all the code paths in some of the more complex areas of the code, we may introduce new bugs inadvertently. External developers tend to be limited to working on only the less-complex areas of the code, and even the core team is reluctant to modify some logic.\nWe already tried, and failed, to do a complete rewrite of BIND, through the BIND 10 project. Recently, inspired by the alarming experience of spending several weeks trying to pinpoint the source of a severe bug in a particularly complex part of BIND, we have decided to start gradually refactoring BIND 9. The goal is to rationalize and simplify some of the most complex functions to improve maintainability. In the process, we hope to also improve quality, and in some areas, remove performance bottlenecks.\nHow complex is BIND 9? The McCabe Cyclomatic Complexity Index is one well-known measure of the complexity of a function. This measures how many different paths there are through the code.
As a general rule, if C is the complexity, then:\nC \u0026lt;= 10: Function is easy to maintain\n11 \u0026lt;= C \u0026lt;= 20: Function is harder to maintain\nC \u0026gt; 20: Function is a candidate for refactoring\nFunctions in BIND 9 with C \u003e 20 | Functions with C \u003e 50 | Functions with C \u003e 100\n535 | 93 | 26\nReality check Witold Kręcicki, the BIND developer who proposed this refactoring project, devised an index to measure the overall complexity of a software system, based on how many complex functions it has.\nWPK Maintainability Index This index measures how many functions need refactoring, and indicates how deep that refactoring needs to be.\nHe looked at the complexity of several other software systems, including two from ISC and two newer DNS systems. In the chart below, lower numbers indicate a need for more extensive refactoring. Overall, BIND is more complex than Kea, Knot, or PowerDNS and less complex than ISC DHCP. This correlates with what we know about the maintainability of those projects.\nValidation of complexity vs CVEs We also checked to see where our worst bugs, the critical defects that are published as CVEs, are located. We expected to see some correlation with high complexity code, and we found it.\nIn the past 2 years ISC has published 21 CVEs in BIND 9. 18 of those were in current public versions of named (2016-2848 was in an unsupported version that was shipped in some operating system distributions, 2016-2775 was in lwresd, 2016-1284 was in our subscription branch only).\nOut of these 18 bugs, 13 were in overly complex functions (cyclomatic complexity \u0026gt; 20). 10 were in very complex functions (\u0026gt; 60) (see table below for more details).\nResources for refactoring Our current plan is to refactor three major functions and files in our next major release, BIND 9.12. We estimate this will consume around 25% of our BIND development resources.
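To make the McCabe measure concrete, here is a small illustrative counter for Python functions. BIND itself is written in C, and this sketch (the function name mccabe and the set of counted node types are our own choices) only demonstrates how decision points add up; it is not the tool that produced the numbers above.

```python
import ast

# Illustrative-only cyclomatic complexity counter: start with one
# straight-line path, then add one path per decision point found.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def mccabe(source):
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' hides len(values) - 1 extra branch points.
            complexity += len(node.values) - 1
        elif isinstance(node, BRANCH_NODES):
            complexity += 1
    return complexity

snippet = '''
def classify(c):
    if c <= 10:
        return 'easy to maintain'
    elif c <= 20:
        return 'harder to maintain'
    else:
        return 'candidate for refactoring'
'''
print(mccabe(snippet))  # two if/elif decision points: complexity 3
```

A function like query_find, with an index in the hundreds, simply contains hundreds of such decision points, which is why it is so hard to test exhaustively.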
This means one engineer will be dedicated to refactoring while the remainder of the team focuses on fixing bugs, supporting users, and adding new features.\nWe will target the most complex functions which we know are frequently exercised code paths, where we have a lot of demand for new development. The goals for each function will vary slightly, but overall, the objectives are:\nmake the complex functions more modular\nrationalize functions that were layered on top of other features\nreduce complexity for resulting functions to McCabe 50 or less\nassess and improve test coverage\nRefactoring targets for BIND 9.12 Our initial targets for refactoring include:\nRPZ - Response Policy Zones. This feature enables a \u0026ldquo;DNS firewall\u0026rdquo; function, by enabling re-writes of responses for some queries. This is an area of frequent enhancement requests but it has also been the source of some serious bugs. The design goal is to disentangle the RPZ implementation from rbtdb.c (the core red-black tree data structure of BIND) so that RPZ policy structures are updated similarly to catalog zone processing in 9.11. This will reduce scope for any new bugs, and make RPZ more readily understandable and maintainable.\nquery_find(). This function has an incredible complexity index of 453 before refactoring. Refactoring query_find will mean refactoring all of query.c. The goal is to create a central structure to encapsulate the state and pare off any functions that can be externalized, passing state back and forth. This will result in a number of simpler, easier-to-understand (and more easily testable) functions.\nresquery(), answer_response(), noanswer_response(). These functions are the source of CVE-2016-9131 and CVE-2016-9147. Each of these is fairly complex (resquery = 154, answer_response is 74, noanswer_response is 79). Refactoring these functions will mean rewriting resolver.c. We hope to complete these three and release them by the end of 2017 in BIND 9.12.
After we finish that, we hope to be able to continue with refactoring. A few top candidates for 2018 are:\nSocket code. BIND was written for a single CPU: named has one thread running on one core, receiving all the requests and handing each job off to a worker. This causes a lot of context switches, and moving work from one core to another core is costly. The idea is to have multiple listeners, one on each core. This will require a redesign of the handling of incoming connections.\nrbtdb.c. The BIND red-black tree database is the main data store in BIND. We would like to split it into separate functions for cache vs. zone storage.\nThe funding challenge We are limited in what we can do by our funding model: we are funded primarily by support revenues from users who subscribe to annual support contracts. These users are paying for priority action on bugs that impact them, attention to feature requests, and troubleshooting and diagnostic help. While they will certainly welcome improved code quality, the reality is that everyone would like someone else to fund that. In addition, refactoring is going to mean putting new feature development in some areas on hold, while we are re-writing functions those features will use.\nSo, while we plan to dedicate 25% of our BIND resources to refactoring, we may have to modify that plan if we can\u0026rsquo;t find funding for refactoring.\nOur first-year goals are to refactor or redesign a small percentage of what needs to be refactored. We hope to be able to continue the refactoring, and deepen it to include removing obsolete features and associated code, in coming years. If we can do this, we can rejuvenate BIND and prolong its relevance for another decade.\nUnderwriting the refactoring effort If your organization would like to support this BIND refactoring effort, please contact sales@isc.org to discuss making a donation.
Individual donors, consider making a donation to ISC and mentioning \u0026ldquo;refactoring\u0026rdquo; in your comments.\n* All figures provided for Lines of Code include blanks and comments.\nAppendix - Source of recent CVEs vs code complexity\nCVE | Most complex fn in bugfix | Function complexity\n2016-9778 | query_find | 453\n2016-9444 | cache_name | 98\n2016-9147 | answer_response | 76\n2016-9131 | cache_name | 98\n2016-8864 | cache_name | 98\n2016-2776 | dns_message_rendersection | 38\n2016-1286 | answer_response | 76\n2016-1285 | ns_control_docommand | 65\n2015-8461 | resquery_response | 154\n2015-8000 | resquery_response | 154\n2015-5722 | pkcs11dh_fromdns | 28\n2015-5477 | dns_tkey_processquery | 29\n2015-1349 | keyfetch_done | 72\n","permalink":"/blogs/bind-9-refactoring/","tags":["BIND","BIND 9.12","bugs","CVE","maintenance","refactoring","vulnerability","security"],"title":"BIND 9 Refactoring"},{"categories":["DNS"],"contents":"An important tool used in managing IPv4 traffic is address blacklisting; if a server is found to only or mostly produce junk, a reputation service (Spamhaus, Phishtank, or others) will add the address to its list, and networks worldwide will block it, using tools like SpamAssassin, null route advertisement in BGP, or DNS RPZ. Similar reputation services are used for names as well.\nThe question has arisen: \u0026ldquo;Will IPv6 end address blacklisting\u0026rdquo;?\nI think not, but the mechanisms used will likely change.\nThere are a lot more addresses, and they can be used pretty creatively. Some companies, for example, are experimenting with running a /64 (the prefix normally applied to a LAN) to a physical chassis and using it to identify containers within the system. That allows them to use BGP between a physical chassis and the routing fabric in a data center. An IPv6 address, in that context, replaces an IPv4 address+port pair. Also, temporary addresses are more easily created.
Windows and MacOS place their permanent address into DNS and use it for incoming sessions; outgoing sessions use temporary addresses that change periodically, as a means of topology obfuscation.\nI suspect that reputation services and blacklisting will be alive and well in the IPv6 network, but might be applied to prefixes instead of addresses, or larger (shorter) prefixes rather than smaller (longer) ones. We still have miscreant nodes, and we will still need to filter them out.\nThe place I scratch my head about in such cases isn\u0026rsquo;t the number of addresses. It\u0026rsquo;s the fact that they can change (which is also true in IPv4, but easier in IPv6). In networks using address autoconfiguration, address creation is so easy that people have seriously suggested creating a new temporary address for each TCP connection opened. If normal software can do that, so can malware, which becomes an attack on both the local routing system and the viability of the reputation services.\nIn a data center using prefixes for chassis, one approach to that will be to move the \u0026ldquo;good\u0026rdquo; containers to another machine, and therefore prefix, and kill the remaining ones. That also remediates malware the system may be running. When the prefix is removed from the blacklist, it can be put back into circulation. On LANs, the principles of RFC 4192 can be used to renumber it by deploying a new prefix and then removing the old.\n\u0026ldquo;A good reputation,\u0026rdquo; a wise man once said, \u0026ldquo;is more valuable than costly perfume.\u0026rdquo; Responsible blacklist providers offer a way to remove single addresses, or to at least contest their blacklisting (e.g. https://www.spamhaus.org/lookup/), but today these don’t enable you to rehabilitate an entire prefix. 
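Prefix-based blocking of this kind is straightforward to express with standard library routines. The sketch below (the blocklist contents are invented, using the IPv6 documentation range 2001:db8::/32) shows why a temporary address that hops around inside its /64 still matches the listing.

```python
import ipaddress

# Hypothetical blocklist keyed by /64 prefixes rather than by single
# addresses, as discussed above.
blocked_prefixes = [
    ipaddress.ip_network('2001:db8:bad:1::/64'),
    ipaddress.ip_network('2001:db8:bad:2::/64'),
]

def is_blocked(address):
    addr = ipaddress.ip_address(address)
    # Membership is tested against the whole prefix, so rotating the
    # low 64 bits (temporary addresses) does not evade the listing.
    return any(addr in prefix for prefix in blocked_prefixes)

print(is_blocked('2001:db8:bad:1::c0ff:ee'))  # True: inside a listed /64
print(is_blocked('2001:db8:cafe::1'))         # False: different prefix
```

Rehabilitating a prefix is then simply removing one entry from the list, which is the per-prefix removal process argued for here.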
For IPv6 networks, if we block by prefix, we will need to rehabilitate by prefix.\n","permalink":"/blogs/will-ipv6-end-address-blacklisting/","tags":["IPv6","IPv4","blacklist"],"title":"Will IPv6 End Address Blacklisting?"},{"categories":["BIND","DNS"],"contents":"Update September 28, 2017 - ICANN has decided to postpone the October 11th key rollover step. No new schedule is available yet. https://www.icann.org/news/announcement-2017-09-27-en\nUpdate August 2, 2017 - see this more recent, detailed description of the Root KSK Rollover in BIND.\nExecutive Summary If you manage a DNS resolver, you may need to take action due to the upcoming root key rollover.\nIf you use BIND with \u0026ldquo;managed-keys\u0026rdquo; for the root zone or \u0026ldquo;dnssec-validation auto\u0026rdquo;, there is low risk.\nIf you use BIND with \u0026ldquo;trusted-keys\u0026rdquo; for the root zone, you need to update your configuration.\nAnyone setting up a new BIND instance around the time of the root key rollover will need to pay careful attention, to ensure their system is able to initialize properly.\nOrganizations repackaging or redistributing BIND will need to update their distributions in 2017 to ensure any new installations that happen during or after October 2017 include the new key.\nThere are some risks in all configurations, discussed below.\nWhat is the Root Key (KSK) Rollover? The DNS root key is a cryptographic public-private key pair used for DNSSEC signing of the DNS root zone records. The root zone key-signing key (KSK) serves as the anchor for the “chain of trust” that enables DNS resolvers to validate the authenticity of any signed data in the DNS. The integrity of the DNS depends on a secure root key.\nIn 2017 IANA will replace the current key with a new key. It is important to roll cryptographic keys periodically to help protect them from compromise.
In this case, since this is the very first time the key has been changed since the DNS root was initially signed in 2010, it is also a test of the rollover process.\nTimeline Matt Larson presentation at DNS-OARC, Dallas, TX, Oct 2016\nThe rollover includes multiple steps. It has already begun, and will continue through 2017.\nKey dates are:\nOctober 27, 2016: KSK rollover process begins as the new KSK is generated.\nJuly 11, 2017: Publication of new KSK in DNS.\nSeptember 19, 2017: Size increase for DNSKEY response from root name servers.\nOctober 11, 2017: New KSK begins to sign the root zone key set (the actual rollover event). (NB: Postponed as of September 28, 2017)\nJanuary 11, 2018: Revocation of old KSK.\nMarch 22, 2018: Last day the old KSK appears in the root zone.\nAugust 2018: Old key is deleted from equipment in both ICANN Key Management Facilities.\n(Dates for these steps are published by ICANN at https://www.icann.org/resources/pages/ksk-rollover/#overview)\nImpact of Changing the DNSSEC Root Key DNS Message Size Increase\nThe first impact is increased size of DNS messages containing the root keys. The process of replacing the 2010 key with the 2017 key requires a period of overlap, during which the 2010 key is still used for signing, but the 2017 key is also present, so that systems can learn the 2017 key and \u0026ldquo;trust\u0026rdquo; it, based on the 2010 key. While both keys are present, they will increase the size of some messages in the DNS. Even after the 2010 key is removed, the messages will still be larger than they were before the key rollover was started, because the 2017 key is larger than the 2010 key. (The larger 2017 key is of course harder to crack, and therefore more secure.) While all the effects of this increase in message size are difficult to predict, this issue has been studied extensively and is not believed to pose any significant problem.
(see ICANN\u0026rsquo;s external design team\u0026rsquo;s ICANN Root Zone KSK Rollover Plan for detailed analysis or Geoff Huston\u0026rsquo;s more recent blog posting on the impact of the root key rollover). In addition to that, ICANN\u0026rsquo;s operational plans are also available.\nImpact on Validating Resolvers\nOf much greater concern is the possible impact on validating resolvers. All DNS resolvers that are currently validating, or that wish to enable DNSSEC validation, must be updated with the public half of the 2017 root KSK, and they have to switch from using the 2010 key to the 2017 key for validation at the right time. This is an issue that applies to any DNS software, not just BIND. A query to a resolver that tries to use the 2010 key to validate after the rollover will fail, and any signed DNS data will not be available to those users. This would mean a blackout of DNSSEC-signed zones for users of validating resolvers that don’t adopt the updated key. Since the root zone is signed, this would effectively black out the root, and with it, the entire DNS tree, for those users.\nIt is not difficult to ensure your DNSSEC-validating BIND resolver is properly updated to track the root key rollover. BIND stores the public half of the root key in a local trust anchor database. This database is created when you install BIND and is initialized with the bind.keys file included in the BIND distribution. After initial installation, BIND provides two ways to manage that database: trusted-keys and managed-keys. What, if anything, you need to do during the root key rollover of 2017 depends on which method you are using. We recommend the use of managed-keys, because with managed-keys, BIND will self-update when the root key changes.
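As a sketch, the recommended automatic setup amounts to a one-line named.conf option (the rest of the configuration, and any other options, are omitted here):

```
options {
    // Validate DNSSEC using the built-in root trust anchor, and keep
    // it current automatically via RFC 5011 tracking (managed-keys).
    dnssec-validation auto;
};
```

With this shortcut in place, BIND initializes the trust anchor database itself and then maintains it on its own, which is the self-updating behavior the rollover timeline above depends on.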
If you are using BIND with trusted-keys, you will need to update your configuration prior to October, 2017, to avoid a potentially serious service impact.\nNo Impact on Authoritative Publishers\nIf you are managing an authoritative BIND system, the root key rollover should not have any impact on your operations. It would still be wise to be aware of the timing of the rollover, particularly if you are signing your zones.\nBIND with Managed-Keys Managed-Keys is the newer, recommended method for maintaining the trust anchor database, and it is the default configuration. If you used the “dnssec-validation auto;” shortcut, that command will have enabled managed-keys for you.\nThe managed-keys method was introduced in BIND 9.7.0 and permits BIND to automatically update the trust anchors. A BIND resolver configured to use managed-keys will initialize itself upon installation with a default anchor. This anchor comes from a supplied initial-key (if you specify one in the managed-keys statement), from the bind.keys file, or from a built-in default trust anchor if neither of the other seeds is provided. During initialization, BIND fetches the DNSKEY RRset directly from the zone apex, and validates it using the key specified in the managed-keys statement. If the DNSKEY RRset is validly signed, then it is used as the basis for a new managed-keys database.\nAfter the initialization, when BIND is restarted, if there is a keys database, BIND will use that, rather than the key specified in the bind.keys file. The bind.keys file is not consulted, or used, after initial installation, so changing the bind.keys file on a BIND server that is already initialized to use managed-keys will not have any useful effect.\nInitializing the BIND managed-keys database\nAfter it is running, BIND observes if there are new trust anchors being introduced for the root, and downloads them and updates the trust anchor database.
If you are using managed-keys, BIND will obtain the new root KSK automatically, once it is published in July 2017. A running resolver will query the root periodically for the trust anchor point, whether anyone is querying the resolver or not, so this is not dependent on having a lot of traffic.\nThe process of updating and putting the new key into service is described in IETF RFC 5011, which BIND supports.\nWhat to do if you are using BIND with managed-keys\nRelax! You probably don’t have to do anything at all. Verify that your BIND instance has acquired the new key after it is published in the DNS, in July, 2017. If you run rndc managed-keys status, you will see the new key marked as ‘added’ after its July 2017 publication.\nIf you are using multiple views, check each view to ensure the new key is showing. (BIND 9.7.0 permitted only one managed-keys statement per system, but in BIND 9.8 this was expanded so that there can be a separate set of managed-keys per view.)\nIf you are running an older version of BIND, and don’t have access to the rndc managed-keys command, you may have to check the .mkeys files in each view, as below, to see if the new keys were added. From the ARM:\nIf named is not configured to use views, then managed keys for the server will be tracked in a single file called managed-keys.bind. Otherwise, managed keys will be tracked in separate files, one file per view; each file name will be the view name (or, if it contains characters that are incompatible with use as a file name, the SHA256 hash of the view name), followed by the extension .mkeys. Monitor ICANN for updates on the process as there may be changes in the dates, or new information provided. Be on the lookout for DNSSEC validation failures when the key in use changes to the new key. BIND with Trusted-Keys Trusted-Keys is the older, manual method for maintaining the trust anchors. 
ISC recommends against using trusted-keys for the root zone unless you are an expert user with a specific use case requiring manual key management. If you are still using trusted-keys, the best way to avoid an outage during the upcoming key rollover is to change from trusted-keys to managed-keys.\nWith the trusted-keys method, the system administrator must manually update the bind.keys file, which BIND uses to create the trust anchor database. When you download BIND, there is a copy of the current bind.keys file in the root directory, and another is always available from the ISC website. This file contains the public half of the root KSK.\nWhat to do if you are using BIND with trusted-keys\nIf you can, the best thing would be to update your configuration to use managed-keys, at least for the root zone key. If you choose this option, it is essential to make this change before the key rollover occurs in October, so that BIND can successfully initialize managed-keys with the 2010 key prior to the rollover.\nThe easiest way to enable managed-keys is to simply use the shortcut, dnssec-validation auto; . You must also have a bind.keys file with the 2010 key included, which you can get by downloading a fresh copy of BIND, or by just downloading and installing the keys file itself. To initialize the managed-keys database, restart BIND with managed-keys enabled and a valid bind.keys file present. If you do this prior to the October 2017 rollover, BIND will validate the 2010 key and self-maintain during the rollover.\nIf you want to continue using trusted-keys for the root zone, you must ensure your bind.keys file is updated to add the 2017 root key before the rollover occurs. You can’t do anything to prepare until the new key is published. To get the new bind.keys file after the 2017 key is published:\nIf you download and run an updated version of BIND from ISC* after the new key is published (currently planned for February, 2017), the new key will be included. 
(* While we will update the software published by ISC, and we will alert the operating system packagers to update their packages, we have no control over what is provided in the various operating system packages.) If you then restart with a managed-keys configuration (either using managed-keys or dnssec-validation auto), BIND will initialize with the included key and self-maintain during the key rollover. If you don’t want to update your BIND software, you can download just the new key from ISC or ICANN and manually add it to the bind.keys file in /etc (it is moved there after installation), or replace the bind.keys file with a new one downloaded from isc.org, and restart BIND. This will cause BIND to load the keys in the new bind.keys file. Be sure to update the bind.keys file in your \u0026ldquo;master copy\u0026rdquo; of BIND if you have a provisioning system creating builds for new installations. Keep the 2010 key in the trust anchor database during the overlap and rollover period. When the old key is withdrawn, you may remove the old key (you don’t have to and it might be wisest to leave it in there just in case). In Summary If you are running authoritative services with BIND, or a resolver that is not doing DNSSEC-validation, you should not see an impact. If you are running a BIND validating resolver using managed-keys, relax, you should be fine. If you are curious, check for the new key in your managed-keys BIND instance after July, 2017. If you are running a BIND validating resolver using trusted-keys and you can upgrade to managed-keys, do so now, before the root key rollover. In any case, we recommend monitoring news from ICANN about the root key rollover, as dates may change. Be on the lookout for possible DNSSEC validation failures on or around October 11th, 2017, in case of unforeseen problems. ISC will be releasing updated versions of BIND in 2017 that incorporate the new key. At this time, the new key is not yet ready to be published. 
ISC will be able to begin introducing the new key in BIND at some point after the successful completion of the next key ceremony, currently planned for February 2, 2017. Anyone providing re-packaged versions of BIND should plan to update their distributions in Q1 or Q2 of 2017. Older versions of BIND, for example in older operating system releases, must also be updated.\nCorner Cases We have tried to think of corner cases where BIND may not work as some might expect it to, so we can check BIND’s behavior, change it if necessary, or simply explain what to expect.\nBIND can’t update the managed-keys.bind file The working directory should always be writable by named. If, for some reason, it is not, then the managed-keys initialization process will not work properly, because it has to write to the managed-keys.bind file.\nRunning instance updates, but master copy is not updated A popular operational practice is to use an external provisioning system to create docker images or similar packages, to use when installing and re-installing BIND. If you are using a configuration management system like this, possibly over-writing the managed-keys file created on initial installation, you may have to update the bind.keys file in your provisioning system. Otherwise, you run the risk of installing a version of BIND with an outdated key that will not validate during installation.\nFresh install of an older copy of BIND It is important to download a fresh copy of BIND when doing a new installation, in case the root key has since been updated. If you install a copy of BIND with only the 2010 key after the 2017 root key rollover, it will not initialize properly, and will not update itself with the new 2017 key. The trust anchor database must first initialize with a currently valid key. 
This is a pre-condition for the 5011 rollover support.\nFrom the ARM:\nThe first time named runs with a managed key configured in named.conf, it fetches the DNSKEY RRset directly from the zone apex, and validates it using the key specified in the managed-keys statement. If the DNSKEY RRset is validly signed, then it is used as the basis for a new managed keys database. Questions Q. Wouldn\u0026rsquo;t it be safest to disable DNSSEC validation? Uh, no. This is definitely the wrong time to give up on DNSSEC validation. After years of debate over the net value of DNSSEC, given that signed answers are larger and can be used for amplification attacks, DNSSEC validation is now looking like a great defense against one current significant abuse technique. The signed \u0026ldquo;negative space\u0026rdquo; in the DNS can prevent \u0026ldquo;random-subdomain\u0026rdquo; query attacks (see the IETF Draft on NSEC Aggressive Use or the Geoff Huston blog post on this). In addition, there is growing interest in using DNS with DNSSEC to ensure integrity of cryptographic keys for other applications, such as secure email.\nQ. How can you determine whether a BIND instance is configured for managed-keys or for trusted-keys? Check your named.conf file for a managed-keys or trusted-keys statement specifically for the root zone, “.”\nThe following is an example of what you would like to see:\nmanaged-keys { . 
initial-key 257 3 8 \u0026#34;AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcCjF FVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJRkxoX bfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efucp2gaD X6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA6G3LQpz W5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQdXfZ57relS Qageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhYB4N7knNnulq QxA+Uk1ihz0=\u0026#34;; }; If you are running BIND 9.11 or later, you can use the RNDC command,\nmanaged-keys (status | refresh | sync) [class [view]] You can use both managed and trusted keys for different trust anchors, so look specifically for the key for the root zone, “.”.\nIf you don\u0026rsquo;t have access to the relatively new rndc managed-keys feature, use the contributed script in contrib/scripts/check5011.pl. This tool has similar functionality to the rndc command. It reads the managed-keys.bind zonefile and reports on the status of the keys (trusted, untrusted, pending until such-and-such a time, revoked, etc.).\nQ. It says in the ARM that there is only one managed-keys statement per system, but here you mention one per view? The first time managed-keys was added to BIND, it was system-wide. Later it was updated to support a separate trust anchor database for each view. We didn\u0026rsquo;t update all the places in the ARM we should have when we made that change. We will fix the ARM, sorry about that.\nQ. Can I have more than one root key at a time? Of course. You can have more than one key defined for any zone. During the root key rollover, there is a period of several months when two keys will be published for the root.\nIf you have one trusted-key for \u0026ldquo;.\u0026rdquo; and a different managed-key for \u0026ldquo;.\u0026rdquo;, BIND loads up both keys \u0026ndash; one as trusted and the other as managed.\nIf you have the same key configured as both trusted and managed, BIND loads the managed and ignores the trusted.\nQ. 
What is the difference between dnssec-validation auto and dnssec-validation yes? The difference between dnssec-validation yes and dnssec-validation auto is subtle but important. Dnssec-validation yes requires that a managed-keys statement include an initial key. If you change dnssec-validation auto to dnssec-validation yes and there are no root keys configured in the managed-keys statement, then that signals to named that you no longer care about the root key and you want it to delete it from the database. You probably don\u0026rsquo;t want to do that in a production system. We recommend using dnssec-validation auto, which will use the built in root key as the initial trust key for initializing the database.\nQ. What is ISC going to do to help OS packagers update their packages? This is a significant concern, because many users get their BIND software via operating system packages. ICANN is considering a possible DNS-ecosystem wide CVE or similar operational notice to reach operating system packagers who publish versions of any DNS system. This notice would come out in 2017. ISC will participate in that announcement and will provide OS packagers of BIND with explicit information on how to incorporate just the new root key into their packages, particularly when they are packaging older versions of the software.\nThank You\nSeveral people reviewed and provided comments or answered questions for me. Besides the ISC BIND team I got advice and helpful comments from:\nEd Lewis, ICANN Timothe Litt, BIND user and contributor Tony Finch, BIND user and contributor ","permalink":"/blogs/2017-root-key-rollover-what-does-it-mean-for-bind-users/","tags":["BIND","root key"],"title":"2017 Root Key Rollover – What Does it Mean for BIND Users?"},{"categories":["BIND","DNS"],"contents":"We are proud to bring you another great version of BIND, 9.11.0. 
We have added many new features, updated our support for DNS standards, improved efficiency in a number of areas, and made provisioning and DNSSEC operations more convenient.\nBIND users now have three supported versions to choose from: 9.9, 9.10 and now, 9.11. We recommend that administrators run one of our older, stable branches, such as a 9.9- or 9.10-based release on larger-scale critical production systems, while testing the new 9.11 branch until the second maintenance release. Version 9.9 is our current Extended Support Version, with support guaranteed through December 2017. If you are wondering what the differences are between these releases, check the BIND 9 Significant Features Matrix.\nBIND 9.11.0 release notes are posted alongside the software download, and you can find many articles in our Knowledgebase about the new 9.11 features. We posted the recording of our webinar on the new features in our YouTube channel. Below are the highlights of major enhancements in BIND and BIND tools.\nCatalog Zones In 9.11, we have added a new method of provisioning secondary servers called \u0026ldquo;Catalog Zones.\u0026rdquo; This is an implementation of draft-muks-dnsop-dns-catalog-zones. This feature will automatically propagate new zones added to the primary master to the secondary servers, or remove zones deleted from the primary master, eliminating the need for separate scripts to do this.\nA catalog zone is a regular DNS zone which contains a list of \u0026ldquo;member zones,\u0026rdquo; along with the configuration options for each of those zones. When a server is configured to use a catalog zone, all the zones listed in the catalog zone are added to the local server as slave zones. When the catalog zone is updated (e.g. by adding or removing zones, or changing configuration options for existing zones), those changes will be put into effect. 
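As an illustrative sketch of the secondary-server side of this (the zone name and primary address here are hypothetical, and the exact syntax should be checked against Chapter 9 of the BIND 9 Administrator Reference Manual):

```
options {
    catalog-zones {
        zone "catalog.example" default-masters { 192.0.2.1; };
    };
};

// the catalog zone itself is transferred like any other slave zone
zone "catalog.example" {
    type slave;
    masters { 192.0.2.1; };
};
```

Member zones listed in catalog.example would then be added to this server automatically as slave zones, using the default-masters address as their primary.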
Since the catalog zone is itself a DNS zone, this means configuration changes can be propagated to slaves using the standard AXFR/IXFR update mechanism.\nThis feature should be considered experimental. Example catalog zone configurations can be found in Chapter 9 of the BIND 9 Administrator Reference Manual. For an overview of how to configure catalog zones, read A Short Introduction to Catalog Zones. JP Mens tested this feature prior to release and wrote a blog post on it.\nAddzone/Delzone Provisioning Improvements We have added some new RNDC commands to facilitate automating zone provisioning, and have also made a significant improvement to the way we store the zones added this way.\nrndc delzone can now be applied to zones which were configured in named.conf; it is no longer restricted to zones which were added by rndc addzone. (Note, however, that this does not edit named.conf; the zone must be removed from the configuration or it will return when named is restarted or reloaded.) When BIND is built with the lmdb library (Lightning Memory-Mapped Database), named will store the configuration information for zones that are added via rndc addzone in a database, rather than in a flat \u0026ldquo;NZF\u0026rdquo; file. This dramatically improves performance for rndc delzone and rndc modzone: deleting or changing the contents of a database is much faster than rewriting a text file. rndc modzone can be used to reconfigure a zone, using similar syntax to rndc addzone. Other RNDC Improvements RNDC can now be scripted using Python. A single RNDC connection is opened and used/re-used for the script.\nA read-only option is now available. A restricted set of rndc commands is allowed, which can report information from named, but cannot reconfigure or stop the server. By default, the control channel access is not restricted to these read-only operations.\nDNSSEC Key Manager Best practices for DNSSEC implementation require regular key rollovers. 
This can be a maintenance burden and is now easily automated with a new utility.\ndnssec-keymgr is meant to run unattended (e.g., under cron). It reads a policy definition file (default: /etc/dnssec.policy) and creates or updates DNSSEC keys as necessary to ensure that a zone\u0026rsquo;s keys match the defined policy for that zone. New keys are created whenever necessary to ensure rollovers occur correctly. Existing keys\u0026rsquo; timing metadata is adjusted as needed to set the correct rollover period, prepublication interval, etc. If the configured policy changes, keys are corrected automatically. See the dnssec-keymgr man page for full details.\nMany thanks to Sebastián Castro for his assistance in developing this tool at the IETF 95 Hackathon in Buenos Aires, April 2016. We also include a contributed python script, kasp2policy, to create a DNSSEC policy file from an OpenDNSSEC KASP XML file.\nParent-Child Updating (DNSSEC Information) Another task associated with the DNSSEC KSK rollover is updating the parent zone with the new DS record, or DNSKEY record. Today this is usually accomplished by manually filling in a web form. After much discussion in the industry, the consensus seems to be that the most flexible automated way to do this with adequate security, is to publish the information in the child zone, using DNSSEC to ensure the integrity of the information.\nFollowing RFC 7344 Automating DNSSEC Delegation Trust Maintenance, two new resource records, CDS and CDNSKEY, are created and populated automatically for DNSSEC-signed zones, on the server that is authoritative for those zones. 
These \u0026ldquo;child\u0026rdquo; records can be looked up by the parent, enabling the parent to update itself.\nNegative Trust Anchors When a DNSSEC signed zone fails to validate, sometimes the problem is due to a misconfiguration or maintenance problem (such as expired signatures) in the authoritative system.\nThe rndc nta command can now be used to set a \u0026ldquo;negative trust anchor\u0026rdquo; (NTA) on a resolver, disabling DNSSEC validation for a specific domain; this can be used when responses from a domain are known to be failing validation due to administrative error rather than because of a spoofing attack. NTAs are strictly temporary; by default they expire after one hour, but can be configured to last up to one week.\nIn addition, named will periodically test to see whether data below an NTA can now be validated, removing the NTA if it is no longer needed. This feature can be disabled.\nDNSTAP Dnstap is a fast, flexible method for capturing and logging DNS traffic, developed by Robert Edmonds at Farsight Security, Inc. Dnstap is supported by several open-source DNS servers, now including BIND.\nUsing dnstap enables capturing both query and response logs, with less impact on the overall throughput of the BIND server than native BIND logging.\nMessages may be logged to a file or to a Unix socket. Support for log file rotation will depend on which option you choose. There is a new RNDC command for \u0026ldquo;manually\u0026rdquo; rotating the log file included in BIND 9.11.\nA new utility dnstap-read has been added to allow dnstap data to be presented in a human-readable format. 
For more information, see Using DNSTAP with BIND.\nMinimal Response to ANY Queries Queries for ANY records are a possible abuse mechanism because they typically extract a response much larger than the query.\nThe new minimal-any option reduces the size of answers to UDP queries for type ANY by implementing one of the strategies in \u0026ldquo;draft-ietf-dnsop-refuse-any\u0026rdquo;: returning a single arbitrarily-selected RRset that matches the query name rather than returning all of the matching RRsets.\nminimal-responses takes two arguments: no-auth suppresses populating the authority section but not the additional section; no-auth-recursive does the same but only when answering recursive queries.\nThanks to Tony Finch for the contribution.\nDNS Cookies DNS Cookies, per RFC 7873, are now enabled by default. Cookies are exchanged between client and server to provide IP address identity, helping to prevent attacks using forged IP addresses. Servers enforcing cookies are less susceptible to being used as an effective attack vector for DNS DDOS attacks.\nWhen sending a query, a BIND resolver will send a cookie, and when receiving a query, a BIND authority will look for a cookie. If a valid cookie is present with the query, the BIND authority will whitelist that client for rate limiting. This has the potential to make rate limiting more effective overall because it can focus on less-trusted traffic.\nNew commands include require-server-cookie and send-cookie (both default to ON in 9.11.0). Setting a cookie-secret will enable a cluster of BIND servers to share cookies. For more information, including the potential for DNS cookies to expose EDNS compatibility problems, read DNS Cookies in BIND 9.\nDynamic Database (DynDB) Interface DynDB is a new interface for loading zone data from an external database, developed by RedHat for the FreeIPA project. 
(Thanks to Adam Tkac and Petr Spacek of RedHat for the contribution.)\nDynDB fully implements and extends the database API used natively by BIND. DynDB modules load data from an external data source (e.g. a database), then serve it with the same performance and functionality as conventional BIND zones, enabling you to take advantage of database features not available in BIND, such as multi-master replication. This feature provides a database backend alternative to the BIND DLZ (dynamically-loaded zones). DynDB is faster than DLZ and supports features which DLZ does not support (in-line signing).\nThe software posted at https://github.com/pspacek/bind-dyndb-ldap will be updated to version 11.0, and that version will be interoperable with this feature, providing an LDAP database backend for BIND.\nDig Enhancements The new mdig command is a version of dig that sends multiple pipelined queries and then waits for responses, instead of sending one query and waiting for the response before sending the next. There are also a number of new dig options:\ndig +ednsopt sets arbitrary EDNS options in DNS requests. dig +ednsflags sets yet-to-be-defined EDNS flags in DNS requests. dig +[no]ednsnegotiation enables/disables EDNS version negotiation. dig +header-only sends queries without a question section. dig +ttlunits causes dig to print TTL values with time-unit suffixes: w, d, h, m, s for weeks, days, hours, minutes, and seconds. dig +zflag sets the last unassigned DNS header flag bit. This bit is normally zero. dig +dscp=value sets the DSCP code point in outgoing query packets. dig +mapped determines if mapped IPv4 addresses can be used. DIG users may also be interested in the new, free DIG app for iOS, based on BIND 9.10.4 (currently). Find it in the App Store.\nContributed Patches We included at least 43 contributed patches in BIND 9.11.0. Some of these will have also been included in prior maintenance releases. 
Here are our top contributors:\nTony Finch - Cambridge University (12 patches) Tomas Hozza - RedHat (4 patches) Petr Spacek - RedHat (3 patches) Bill Parker - (3 patches) Timothe Litt - (2 patches) Plus individual patches from:\nThomas Anderson - CalTech BlueCat Networks Bradley Forschinger - Australian National University Alexander Gall - SWITCH Tobias Holgers - Amazon InfoBlox LaMont Jones - Canonical Jakub Luzny - RedHat Stacey Jonathan Marshall - Solaris Software Julian Ospald - Gentoo.org Armin Pech Xose Vazquez Perez David Ramos - Stanford University Tom Sawyer Brad Smith Dr. Jeffry A. Spain - Cincinnati Country Day School Daniel Stirnimann - SWITCH Adam Tkac - RedHat Holger Zuleger - HZNET.DE We thank all the patch contributors, as well as the many people who reported issues that we were able to fix in 9.11.\nStandards Support We try to keep up with the new Internet standards for DNS. In this release we have added support for RFC 7873, Domain Name System (DNS) Cookies. We wrote a draft standard for Catalog Zones (DRAFT), which is still under review by the standards community.\nNew DNS Resource Record Types added in 9.11:\nAVC, CDS, CDNSKEY, CSYNC, NINFO, OPENPGPKEY, RKEY, SMIMEA, TALINK, URI The CDS and CDNSKEY records are automatically created.\nReferences:\nRFC 7344 Automating DNSSEC Delegation Trust Maintenance (informational), added CDS and CDNSKEY RRTYPEs Child-to-Parent Synchronization in DNS, added CSYNC RRTYPE Using Secure DNS to Associate Certificates with Domain Names for S/MIME, added SMIMEA RRTYPE Application Visibility and Control (AVC) RRTYPE Using DANE to Associate OpenPGP public keys with email addresses, added OPENPGPKEY RRTYPE The Uniform Resource Identifier (URI) DNS Resource Record, added URI RRTYPE ","permalink":"/blogs/bind-release-911/","tags":["BIND","release","9.11"],"title":"Release 9.11 Adds Provisioning Options for DNS Authoritative Services"},{"categories":["DHCP","Kea"],"contents":"We are thankful that Mozilla chose to give a MOSS 
award to ISC to help fund development of the Kea DHCP server, through the Mozilla Foundational Technology track. This is a wonderful program, through which Mozilla gives back to the Internet community by sponsoring development of the open source that everyone can use.\nKea is modern software that we hope will eventually replace the extremely popular, but also very mature, ISC DHCP, also known as dhcpd. DHCP software is classic infrastructure. People expect that DHCP software will be available in their operating system, but few people wonder where it comes from, or know how the development is funded. Kea is already packaged for most major Linux and Unix operating systems, but is still missing a few very desirable features.\nISC\u0026rsquo;s MOSS award was for $100,000, which we will use to support design and development of a management API and a secure remote management client. Remote management is an important feature, one that is frequently requested by prospective users. We have a preliminary requirements document posted in the Kea project wiki and are starting work on a design.\nKea is already a fully functional DHCPv4 and DHCPv6 server. We have just released Kea version 1.1, which greatly expanded support for leveraging an external database for host reservations, and added a flexible client classification system. Kea is accepting community contributions on GitLab, with significant contributions enabling initial support for a Cassandra database backend and lightweight DHCPv4 over v6 in version 1.1.\nWe sometimes joke that ISC puts the \u0026ldquo;non\u0026rdquo; in \u0026ldquo;non-profit.\u0026rdquo; ISC has been funding Kea internally, and with 3+ developers and a test engineer, it is a significant effort. We are offering Kea support contracts, which we hope will eventually fund on-going maintenance, but grants like this are essential to add major new functionality, to continue to create open source infrastructure for the future. 
We look forward to spending this award money on adding to the Kea open source an important feature users have been asking for.\n","permalink":"/blogs/isc-receives-mozilla-moss-award/","tags":["DHCP","Kea","MOSS","Mozilla","sponsor"],"title":"ISC Receives Mozilla MOSS Award"},{"categories":["DHCP","Kea"],"contents":"Kea 1.1 is available! We are pleased to announce the availability of Kea 1.1. Kea is ISC\u0026rsquo;s modern DHCP server, which brings new functionality to the datacenter and to any ISP or enterprise that needs to tie dynamic host control into external provisioning systems.\nNew features in Kea 1.1 include:\nHost Reservations Kea 1.0 contained limited support for storing host reservations in the database backend. Kea 1.1.0 has expanded that capability, allowing host reservations to be stored in a MySQL or PostgreSQL database. In particular, Kea 1.1.0:\nAdds host reservation (DHCPv4 and DHCPv6) using the PostgreSQL backend. Adds host reservation for DHCPv6 to the existing MySQL support. Significantly extends the existing host reservation capabilities to include reservations of specific DHCP options, reservations of siaddr, sname, and file fields within DHCPv4 messages, and reservations of multiple IPv6 addresses/prefixes. Allows the MySQL or PostgreSQL host reservation database to be configured read-only, in which case Kea will be able to retrieve reservations from it, but not insert or update existing reservations. This feature is useful when a database (or database view) exists for the particular deployment and the administrator doesn\u0026rsquo;t want to grant read-write access for security reasons. Client Classification In Kea 1.1 the client classification system has been expanded. A class definition contains a name and a test expression of arbitrary complexity; if the test expression evaluates to \u0026ldquo;true\u0026rdquo;, the client is a member of that class. A client may be a member of multiple classes and can acquire options from different classes. 
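As an illustrative sketch (the class name, match string, and option values below are hypothetical), a class definition with a test expression and class-specific option data might look like the following, matching clients whose vendor-class option begins with a particular string:

```json
"Dhcp4": {
    "client-classes": [
        {
            "name": "voip-phones",
            "test": "substring(option[60].hex,0,6) == 'Aastra'",
            "option-data": [
                { "name": "tftp-server-name", "data": "192.0.2.10" }
            ]
        }
    ]
}
```

Clients evaluating to true for the test expression become members of the class and receive its option data.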
If the configuration contains multiple definitions for data for an option in two or more of the global, class, subnet, or host entries, the server will choose the definition from the most specific entry.\nThere are a number of objects and operators available for use in the test expression.\nOperators include: equal, not, and, or, substring, concat Objects include: literals: string, hexadecimal, IP address, and integer options: existence and content relay options for DHCPv4 and DHCPv6: existence and content subfields within vendor and vendor class options: existence, enterprise-id value and content selected fields from DHCPv4 and DHCPv6 packets Classes may be used to select subnets Classes and class specific subnets may contain option data to serve to clients within that class Hook Library Parameters It is now possible to specify parameters for hook libraries in the Kea configuration file. In earlier versions of Kea, hook library authors had to use an external mechanism (such as a file of a known name) to pass information across.\nDHCPv4-over-DHCPv6 RFC 7341 defines an architecture that allows dual-stack clients to communicate with a DHCPv4 server in IPv6-only networks. Kea 1.1 introduces support for this mode of operation. It requires running both DHCPv4 and DHCPv6 servers in a special mode, where the DHCPv6 component does not allocate anything, but decapsulates incoming DHCPv4 messages, sends them to the DHCPv4 server, and then relays back the responses.\nCassandra Database Backend Kea 1.1.0 has added preliminary support for Cassandra as a database backend. In this release of Kea it can only be used to store lease information; it is not able to store host reservations. Cassandra support is currently considered experimental. Use with caution.\nMPL 2.0 License Kea 1.1.0 has been released under the Mozilla Public License, version 2.0. https://www.mozilla.org/en-US/MPL/2.0\nSupport Professional support for Kea is available from ISC. 
Free best-effort support is provided by our user community via a mailing list. Information on all of ISC\u0026rsquo;s public email lists is available here\nIf you have any comments or questions about working with Kea, please share them on the Kea Users List at https://lists.isc.org/mailman/listinfo/kea-users.\nBugs and feature requests may be submitted via GitLab at https://gitlab.isc.org/isc-projects/kea.\n","permalink":"/blogs/kea-1-1-released/","tags":["DHCP","Kea","maintenance","release"],"title":"Kea 1.1 Released"},{"categories":["DHCP","Kea"],"contents":"Database Backend Support in Kea Kea\u0026rsquo;s support for an optional external database backend is quite a useful feature. From the very early versions, Kea administrators have had the ability to store DHCP leases in either an in-memory database (written to a local disk) or, optionally, an external database backend. It can be easier to find and view lease information in a database than in a file, particularly when there are a lot of leases. Kea supports MySQL and PostgreSQL databases today. There is experimental support for Cassandra contributed by the community available in the Master branch.\nDatabase Backend Application: Host Reservations Early adopters of Kea asked for the ability to manage a table of host reservations in a database, rather than in the DHCP server itself. 
These users found it was easier to integrate Kea with their provisioning systems if host addresses were in a database.\nKea 1.0 included support for retrieving DHCPv4 host reservations from a MySQL backend database.\nKea 1.1 will extend that to include:\nDHCPv6 host reservations in MySQL\nSupport for retrieving custom options along with the host addressing, and\nHost reservation support in PostgreSQL, equivalent to MySQL\nHost reservations diagram\nDatabase Backend Application: High Availability Since we knew that a database could serve as lease storage for dynamic leases for a single Kea server, and shared storage for host reservations for multiple Kea servers, it seemed logical that it could also serve as shared storage for dynamic leases for multiple Kea servers. We wanted to test a high-availability deployment of Kea using a database backend that replicated the shared lease state across multiple nodes. The goal was to determine how resilient the DHCP service would be to simulated failures of different components. We chose the Community Edition of MySQL Cluster for this test, but you could use another database (PostgreSQL is also supported), or another cluster technology. As one user posted on our user mailing list: \u0026ldquo;I used MySQL (percona) in a Galera cluster running on each DHCP server to achieve HA. Worked like a charm.\u0026rdquo;\nLogical diagram of test network\nTesting a Database Cluster for High Availability: Two Kea Servers Sharing a MySQL Cluster Backend Our testing focused on two Kea servers connected to a single MySQL Cluster. Although everything is connected to the same network, Kea Server 1 is configured to connect only to the MySQL API Server 1 and Kea Server 2 is configured to connect only to the MySQL API Server 2.\nAside from this difference, the configurations for Kea Server 1 and Kea Server 2 are the same (subnets, options, etc). 
Both Kea servers have the same set of IPv6 addresses to hand out (we tested exclusively with DHCPv6, but we don\u0026rsquo;t expect DHCPv4 to be any different), and are using the same method for deciding which address to hand out.\nSeparate Virtual Networks\nWe tested both \u0026lsquo;shared\u0026rsquo; and \u0026lsquo;separate\u0026rsquo; network connectivity scenarios. In the first scenario, the Kea servers and MySQL Cluster were all on the same network. This is the simplest scenario, and is what you might have if you installed the database backend on the same machine as the Kea server. We also tested a second scenario in which all components were on logically separate networks. This introduced slight delays, but for this test we did not inject significant delays. We didn\u0026rsquo;t see any difference in results, so we don\u0026rsquo;t report them separately below.\nTraffic \u0026amp; Failure Simulation\nWe put each component into a separate virtual machine to make it easier to simulate breakage with individual components. For each configuration, we first verified that a single Kea server could correctly hand out unique addresses to clients requesting them. Then we started breaking connections to the NDB Nodes, to ensure the cluster would continue working and the single Kea server would be able to continue operation. After it was clear that the single server scenario worked, we started the second Kea server, and repeated the process. Then we tried disconnecting the Management Node. After we had completed all the test cases with two Kea servers, we tried an experiment with four Kea servers, two per SQL API node. For all tests related to cluster integrity and reactions to the network failures, we used the \u0026lsquo;perfdhcp\u0026rsquo; tool to generate lots of DHCP traffic. We observed the results by examining the Kea logs, and analyzing tcpdump traffic captures. (Perfdhcp is part of the Kea distribution.) 
We also ran DHCP protocol compliance tests, using our Forge test tool. We tested clients attempting rebinding, renewing, and releasing leases after the server they originally acquired the leases from was disconnected. The primary difference between setting up Kea to use a MySQL backend, and using a MySQL Cluster backend, is in the Kea schema setup script. Everywhere the script specifies ENGINE=INNODB; you must change it to ENGINE=NDBCLUSTER.\nResults All twelve scenarios we tested with two Kea servers were successful. We verified that clients could still get addresses after a brief pause while the cluster stabilized after the ‘breakage’ we caused by disconnecting one or the other NDB node.\n1 Kea Server 1 operating, disable one NDB node. Check to see if clients are still able to get addresses. (x2) PASS\n2 Kea Server 2 operating, disable one NDB node. Check to see if clients are still able to get addresses. (x2) PASS\n3 Both Kea servers operating, disable one NDB node. Check to see if clients are still able to get addresses. (x2) PASS\n4 Both Kea servers operating, disable one NDB node, then re-enable that node. Check to see if Kea servers resume using both nodes. (x2) PASS\n5 Both Kea servers operating, disable the Management node. Check to see if clients are still able to get addresses. PASS\n6 Client with an address obtained from Kea Server 1 attempts to REBIND via Kea Server 2 while Kea Server 1 is offline. PASS\n7 Client with an address obtained from Kea Server 1 attempts to RENEW via Kea Server 2 while Kea Server 1 is offline. A RENEW is normally unicast to the server that originally assigned the address and includes a serverID. A DHCP server should check the server ID of a RENEW request and only respond to requests for its own ID. So, this should not work if the original server is unavailable, and it did not work. PASS\n8 Client with an address obtained from Kea Server 1 attempts to RELEASE via Kea Server 2 while Kea Server 1 is offline. 
As is the case with the RENEW test, Server 2 should not RELEASE and it does not, in compliance with the protocol. PASS\nObservations\n1 Throughout our testing we observed that if either of the MySQL nodes is disconnected, any Kea server connected via that MySQL node will stop responding to clients. Clearly the Kea server must have connectivity to a MySQL node, but it might be possible to add redundancy by enabling Kea to cycle through a list of SQL nodes, or to work with a VIP connected to multiple SQL nodes.\n2 All tests checking network failures showed that our cluster required a maximum of 34 seconds to get back to normal operation after disconnecting any node. It should be possible to reduce the time required for the cluster to reconfigure with further tuning and experimentation.\n3 We did try a more complex deployment model, with four Kea servers. In this test, we tried connecting multiple Kea servers to a single SQL node. The result was unsatisfactory: multiple Kea servers handed out the same IP address. We ran out of time during this test to troubleshoot the problem and had to defer completing the work to prove a more complex deployment model.\n4 Throughout our testing, we observed what appear to be race conditions as both Kea servers tried to allocate addresses from a shared pool in the same sequence. While this worked adequately well, except in the scenario where multiple Kea servers shared a single SQL node (#3), it is not optimal and probably decreases overall throughput. One option to mitigate this problem might be modifying the algorithm for selecting the next address.\nAddress Selection and Assignment Kea 1.0 servers will process a client’s packet exchange in the following sequence:\n1 The client sends the first message (solicit or discover).\n2 Both Kea servers will choose a candidate address. Each will then query the database to see if this address is in use or not. If it is in use the Kea server will choose another candidate address and try it. 
\n3 Kea 1.0 chooses new candidate addresses by incrementing the previously used address and will continue trying addresses until it has exhausted the pool of available addresses. (In the future Kea may provide other choices for this feature.)\n4 Once the Kea server has found an address that is not in use it will generate the reply to the client (advertise or offer).\n5 When the client then attempts to claim that address by issuing a request, the Kea server will attempt to insert the address into the database.\n6 If the database insertion succeeds then the address wasn’t in use and the Kea server will reply to the client, confirming the assignment. If the insertion fails then the address can’t be used (it may have been handed out by the other server).\nWhen there are multiple servers inserting addresses we would expect some friction between the different servers as they may need to make more queries to find an available address. The amount of contention will depend on the topology of the network (when do the packets get to each server and to the SQL and DB nodes), the loading of the servers (how quickly they can respond to a given packet), and probably other factors.\nThere are some differences between the DHCPv4 and DHCPv6 protocols. While DHCPv6 allows sending a different address than the one requested by a client in the Request message, DHCPv4 does not. Therefore, in the case of a lost race between the Kea servers, DHCPv6 will be able to recover faster (by simply sending a Reply with a different address) and DHCPv4 will take a bit longer (by sending a NAK and causing the client to go through the DORA process again).\nAlthough Kea currently supports only a simple iterative allocator, we are considering implementing alternative allocation strategies in future versions.\nConclusion Initial testing of Kea version 1.0 with MySQL Cluster v7.4.10 was conducted to determine whether Kea can be deployed in an Active/Active failover pair, without using DHCP failover. 
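The candidate-selection sequence described above, with the database INSERT acting as the arbiter between racing servers, can be simulated in a few lines. This is a simplified sketch, not Kea code: the `LeaseDB` class and function names are illustrative, and a Python set stands in for the shared SQL lease table.

```python
import ipaddress

class LeaseDB:
    """Stand-in for the shared lease table: an INSERT succeeds only if the
    address is not already present (step 6 above)."""
    def __init__(self):
        self._leases = set()

    def try_insert(self, addr):
        if addr in self._leases:
            return False
        self._leases.add(addr)
        return True

def allocate(db, pool_start, pool_size, start_offset=0):
    """Simple iterative allocator sketch: try a candidate, then keep
    incrementing until an insert succeeds or the pool is exhausted."""
    base = int(ipaddress.ip_address(pool_start))
    for i in range(pool_size):
        addr = str(ipaddress.ip_address(base + (start_offset + i) % pool_size))
        if db.try_insert(addr):
            return addr
    return None  # pool exhausted

db = LeaseDB()
lease1 = allocate(db, "2001:db8::10", 8)  # "Kea Server 1"
lease2 = allocate(db, "2001:db8::10", 8)  # "Kea Server 2", racing for the same candidate
```

Because both simulated servers walk the pool in the same order, the second one always collides on its first candidate and has to retry, which is exactly the friction the observations describe; a randomized or interleaved allocator would reduce it.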
The tests were successful, showing that the database backend can provide the state replication needed for HA operation, whether the database backend was effectively local to the Kea server, or on a remote network.\nThis model has the potential to leverage database replication through the cluster to significantly improve overall resiliency of the solution.\nWe did not use Virtual IP Addressing (VIPs) in our test. It is possible that the Kea servers would not work with a VIP between them and the SQL nodes, because Kea expects a stable database IP address, but the VIP between SQL nodes and DB nodes should work, and is probably a recommended configuration. We recommend co-locating the SQL nodes with their associated Kea servers.\nWe identified several opportunities for additional testing and an opportunity for improving the efficiency of Kea in selecting available addresses in a shared lease backend deployment scenario. We are looking to extend Kea capabilities to support this deployment model in upcoming releases.\nWe are not specifically recommending the MySQL Cluster, because we didn\u0026rsquo;t test any alternatives, but if you choose to use a MySQL Cluster, you might try using the configuration we used because we know it works. We have put detailed instructions for how to set up the MySQL Cluster with Kea in our Knowledgebase.\nIf you are using a database cluster as a Kea backend, we would love to hear how you have it set up, and what success you are having. 
Please consider posting the details of your configuration on the Kea-users mailing list, or sharing them with us at info@isc.org.\nWe were helped in this test by Dale Rogers and James Fogg, who set up the MySQL Cluster and documented the setup in detail in MySQL Cluster set up for Kea 1.0.\nDTR Associates\nSan Francisco, CA\ninfo@dtrassociates.com\n415-377-2880\n","permalink":"/blogs/kea-with-mysql-cluster-backend-test-report/","tags":["DHCP","Kea","cluster","High Availability"],"title":"Kea With MySQL Cluster Backend – Test Report"},{"categories":["DHCP","Kea"],"contents":"A couple of IETF hackathons took place recently (Prague, Yokohama, Buenos Aires) and attracted quite a few geeks and hackers interested in various technologies. One such technology is DHCP. A handful of ISC employees attended these hackathons. We hacked Secure DHCPv6 in Prague (still experimental) and DHCPv4-over-DHCPv6 in Yokohama (on master already, will be shipped in Kea 1.1). In Berlin we decided to focus on YANG and NETCONF.\nA room full of geeks, hacking Kea, Sysrepo, and other code\nWhat are YANG and NETCONF? One of the challenges in managing a large network is that many devices and services use different configuration mechanisms. There are two problems: how to store a configuration and how to apply that configuration to a device or service. To solve this problem in a generic way, the IETF came up with two solutions: YANG and NETCONF. YANG is a data-modeling language used to model configuration and state data that was published by the IETF in 2010 as RFC6020. Network Configuration Protocol (NETCONF) is a network management protocol used to deliver configuration data, originally standardized by the IETF in 2006 as RFC4741; its current revision is defined in RFC6241.\nCurrently there are around 220 YANG models being defined in the IETF, which seems to generate a lot of attention. One of them is being worked on in the DHC working group. It is dedicated to DHCPv6 configuration and is available in this Internet-Draft. 
The model covers all server, client, and relay agent functionalities, but the current focus of the work is the DHCPv6 server. While the model is not mature enough for widespread use, it is sufficiently developed to conduct first lab trials, so we decided to give it a try.\nSysrepo and Kea Kea engineers got in touch with engineers from the Sysrepo project. We discussed several possible approaches and decided to go with an architecture of Sysrepo as the main YANG module storage that connects to Kea, using a custom-developed Sysrepo plug-in that would send the configuration and any future changes to Kea. Sysrepo would expose the models using its NETCONF interface.\nThe work being done One of the essential components of that project is Sysrepo Datastore, a repository for YANG models that can be manipulated both locally (using provided tools) and remotely (using the Netopeer2 NETCONF server). Since we had the prototype YANG model defined in the draft-ietf-dhc-dhcpv6-yang, we decided to use that model as a base and import it into Sysrepo.\nThere were two crucial elements missing, though. Kea uses JSON-formatted configuration files that are read from disk. To make Kea configurable using YANG/NETCONF, we had to develop: 1) some way to export data from Sysrepo to a format understandable by Kea; 2) a way for Kea to receive configuration other than reading a configuration file from disk.\nTo address the first issue, ISC and Sysrepo engineers developed a prototype plugin for Sysrepo. It\u0026rsquo;s a plugin that, when loaded, monitors the state of the YANG model and subscribes to notifications about any changes. When a change is detected, it retrieves a small subset of essential parameters that are necessary to run the basic features of Kea. It then generates a JSON structure and sends it to Kea using the control channel. 
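A minimal sketch of that translation step may make it more concrete. This is not the actual sysrepo-plugin-kea code (which is C, and which writes to Kea's control socket); the model field names here are illustrative, though the `Dhcp6`/`subnet6` layout and the `{"command": ..., "arguments": ...}` envelope follow Kea's documented JSON conventions.

```python
import json

def model_to_set_config(dhcp6_model):
    """Translate a (simplified) dict derived from the YANG model into Kea
    JSON configuration, wrapped in the prototype 'set-config' command that
    a plugin could write to Kea's control channel."""
    kea_config = {
        "Dhcp6": {
            "subnet6": [
                {"subnet": s["prefix"],
                 "pools": [{"pool": p} for p in s.get("pools", [])]}
                for s in dhcp6_model.get("subnets", [])
            ]
        }
    }
    return json.dumps({"command": "set-config", "arguments": kea_config})

msg = model_to_set_config(
    {"subnets": [{"prefix": "2001:db8::/64",
                  "pools": ["2001:db8::100 - 2001:db8::1ff"]}]})
```

The real plugin does the equivalent for the "small subset of essential parameters" mentioned above, then delivers the resulting command over the control channel.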
This code is available on github: https://github.com/sysrepo/sysrepo-plugin-kea.\nTo address the second issue, we extended Kea with a prototype of a new command called set-config. Set-config allows delivering configuration over the command channel. For this specific purpose, we created a separate, temporary repository on github: https://github.com/isc-projects/kea-yang. Once the prototype is more mature and the code related to it is available in the main Kea repository, we will shut down the kea-yang repository.\nAs a result of those two features, we have a working prototype using the following architecture:\nKea and Sysrepo architecture\nThe prototype was able to store the YANG model and configuration for that model, export that configuration to JSON syntax that is understood by Kea and then send that configuration to Kea over the command channel. Kea was able to receive it and reconfigure itself as requested. People interested in recreating this experiment are encouraged to read the README.\nThe next steps It was an exciting experiment and very intense two days of learning, hacking, and understanding new technologies. While the solution is definitely in its experimental phase, we did learn a lot. Kea engineers now have some experience with YANG models in general and the Sysrepo way of handling them in particular. We provided some feedback and suggestions for improvements to both Sysrepo engineers and the authors working on the Internet Draft in IETF. We also know how YANG integration with Kea could be done when we decide to fully develop it. For the time being, we are not going to invest any significant engineering effort into developing it further, but we will observe the project with great interest. It seems likely that in the near future, once the technology matures, Sysrepo will be the way to provide YANG capability in Kea.\nInterested? 
ISC is looking for prospective and existing customers who are interested in sponsoring development of YANG/NETCONF integration with Kea. Depending on how popular this feature request is, we may decide to develop this into a full feature of production quality.\nMany thanks to everyone who participated, contributed, or otherwise helped with this effort. Special thanks go to Lukas Macko and Rastislav Szabo from the Sysrepo project for their suggestions, tips, and code snippets; to Linhui Sun from Tsinghua University for his continuous work on the Internet-Draft that defines the YANG model; and to Mikael Abrahamsson from T-Mobile for coordinating the effort.\nSee you at the next hackathon!\nTomek Mrugalski, ISC Kea lead engineer\n","permalink":"/blogs/ietf-hackathon-in-berlin-kea-and-yangnetconf/","tags":["DHCP","Kea","Hackathon"],"title":"IETF hackathon in Berlin: Kea and YANG/NETCONF"},{"categories":["DNS","BIND"],"contents":"At the DNS Summer Day 2016 [HTML, JP] held in Tokyo on June 24th it was disclosed [PDF, JP] that an unbounded zone transfer (AXFR) operation can cause operational issues due to resource exhaustion for the server that is receiving the zone transfer. On servers running BIND, Knot, or PowerDNS the receiving server may run out of memory, and on servers running NSD the server may run out of disk space. The presenter does not appear to have investigated the behaviour of any closed-source DNS implementations.\nIn hindsight this is perhaps completely obvious, albeit not to our knowledge previously written down publicly anywhere. There are no inherent limits in the AXFR protocol, and neither the AXFR specification (RFC 5936) nor the earlier \u0026ldquo;Threat Analysis of the Domain Name System\u0026rdquo; (RFC 3833) makes mention of this risk.\nISC was contacted by the Japanese Computer Emergency Response Team (JPCERT) on behalf of the presenter a few months ago. 
Our considered opinion was (and remains) that this is an operational security issue and not a security issue within BIND itself, and we responded accordingly. That notwithstanding, we created an internal feature request ticket at the time to add mitigation features to a future version of BIND but otherwise considered the matter closed.\nOur rationale for concluding that this is an operational issue rather than a security one is that in the usual scenario where a secondary server is receiving data from a trusted party it is clearly the responsibility of the operator to ensure that their server is provisioned with adequate resources to handle the expected zone content.\nThere is an increased risk of resource exhaustion if you are receiving zone updates from an untrusted third party. However, in those circumstances a prudent design would separate out the servers that receive the zone updates from third parties from those that actually serve the zone content. Such a design would ensure that the primary DNS function (that of serving DNS records) would be unaffected in the event of the zone update system failing. In most cases it would also be trivial to disable updates from any zone source that might be causing a problem.\nAs outlined above, we consider this an operational issue, so we were disappointed and surprised to discover on Wednesday that a third party had requested a CVE (Common Vulnerabilities and Exposures) database entry for BIND without first consulting us or notifying us of their intent. However, in light of the more widespread knowledge of this issue we have now issued an Operational Advisory and brought forward our plans to release the mitigation feature such that it goes into the next maintenance release for all supported versions of BIND.\nThe presenter has offered patches for BIND and other servers; however, we don\u0026rsquo;t consider these an adequate mitigation as there are other mechanisms by which a third party could submit updates to a zone, e.g. 
IXFR incremental zone transfers (RFC 1995) or DNS UPDATE (RFC 2136), through which they could overwhelm a server\u0026rsquo;s resources. It is expected that the mitigation feature will be configuration options for limiting the number of records in a zone and/or the memory consumption of a zone, but by default these will be left unlimited. Whilst we\u0026rsquo;re still working on the details of our implementation, we anticipate that, in the event of a breach of those limits, BIND would continue to serve the existing zone contents.\n","permalink":"/blogs/the-potential-operational-risks-associated-with-untrusted-zone-transfers/","tags":["DNS","BIND","AXFR"],"title":"The Potential Operational Risks Associated with Untrusted Zone Transfers"},{"categories":["DNS","BIND"],"contents":"ISC published BIND under a very permissive open source license nearly two decades ago, and we have been maintaining it ever since. In December we announced we were changing the license for our Kea DHCP server to the modern and widely used Mozilla Public License (MPL 2.0). The MPL 2.0 license requires that if you make changes to licensed software (e.g. BIND) and distribute them outside your organization, you publish those changes under that same license. It does not require that you publish or disclose anything other than the changes you made to our software. (Read about it at tl;dr Legal https://tldrlegal.com/license/mozilla-public-license-2.0-(mpl-2))\nRecently, we announced we were considering changing the BIND license as well, in order to best preserve that software for the long term.\nISC is the primary developer of and organizational steward for BIND, but many others have contributed to creating and sustaining this community resource. Over the years, numerous users and developers have tested BIND and contributed bug reports, fixes and improvements. 
Expert users have given their time to support less-experienced users on the public bind-users mailing list or have published free resources and tools for use with BIND. Of the 100 or so organizations that provide ISC\u0026rsquo;s financial support, a substantial number have been doing so for nearly a decade. BIND is a large and complex open source project. The BIND development team today consists of only four full-time software engineers, two test engineers, and a manager, a fraction of what would be deployed for a comparable commercial product. We wouldn\u0026rsquo;t have been as successful as we have been in maintaining BIND as a competitive and full-featured system without all this community support.\nThe reason for considering the software license change is to encourage continued community contributions to the BIND project, particularly from those who benefit most from the free open source: vendors incorporating BIND into commercial products and services. We consulted with numerous stakeholders in making this decision, including our operating system partners, recent/frequent patch contributors, major financial supporters, fellow open source DNS developers, ISC support customers, and the bind-users mailing list.\nWe received a lot of thoughtful feedback. By far, most commenters welcomed this change or said they didn\u0026rsquo;t anticipate any impact. A few would prefer we retain the current ISC license. Several people recommended we go even further than MPL 2.0 and consider the more restrictive GPL license. But some important stakeholders strongly requested we avoid the GPL license and refrain from inventing a new open source license. A few technical contributors pointed out opportunities for us to improve our commitment to, communication with, and acknowledgement of, technical contributors.\nWe appreciate all of this feedback. We considered it carefully. 
Although we are aware that the MPL 2.0 license may not require the contributions we are hoping for, it seems to be an acceptable middle ground between the current ISC license and the GPL license. We have decided to move BIND to the Mozilla Public License (MPL 2.0).\nThis change will apply to BIND 9.11.0 and future versions. We will add the MPL 2.0 license terms as we update or change files, beginning with the 9.11.0 beta version. Older versions already published under the ISC license will remain under the ISC license. We will be highlighting the change in our release notes and in the release announcements. In addition, we will be updating our contributor guidelines so technical contributors are aware of how their contributions will be licensed. We are considering other changes to the way people contribute code changes. We do not plan to add a contributor agreement, based on the significant feedback we received against it.\nThose unsure whether or not the license change affects their use of BIND, or who wish to discuss how to comply with the license, may contact ISC at info@isc.org or use our contact form. If you or your employer use BIND and you want to see the project continue as a strong and healthy system, please consider contributing, if you do not today. We welcome donations of any amount at our donations page. Although you may see news items about large foundation grants to some open source projects, most long-term open source projects struggle to raise adequate financial support and the effort we have to put into raising money detracts from the effort we can put into the software.\nWe welcome comments from BIND users, about this or anything else we do. 
Email Vicky Risk, Product Manager at vicky@isc.org or discuss on bind-users@lists.isc.org.\n","permalink":"/blogs/bind9-adopts-the-mpl-2-0-license-with-bind-9-11-0/","tags":["DNS","BIND","license","licensing","MPL","MPL 2.0","open source"],"title":"BIND 9 Adopts the MPL 2.0 License with BIND 9.11.0"},{"categories":["DNS","BIND"],"contents":"ISC published BIND under a very permissive open source license nearly two decades ago. Since then, it has become the world’s most widely used nameserver. Partly, this is due to the popularity of commercial services and software based on BIND. While this is welcome, and these commercial products are undoubtedly adding value, unfortunately not all of these users are giving back to the open source \u0026ndash; either by contributing code or supporting the core team financially.\nThis is unfair to those organizations that are sustaining the open source BIND, a significant and valuable community asset. ISC is the organizational steward for BIND; in order to preserve the software for the long term, we are considering a move to the more restrictive Mozilla Public License (MPL 2.0). (read about it at tl;dr Legal https://tldrlegal.com/license/mozilla-public-license-2.0-(mpl-2))\nThe MPL license requires that anyone redistributing the code who has changed it must publish their changes (or pay for an exception to the license). It doesn’t impact anyone who is using the software without redistributing it, nor anyone redistributing it without changes - so most users will not see any change.\nIn the event we do proceed with the change in license, it will take effect with the BIND 9.11.0 release.\nWe welcome comments from BIND users, including statements of support or concern. 
Email Vicky Risk, Product Manager at vicky@isc.org, tweet at us at @ISCdotORG, or discuss on bind-users@lists.isc.org.\nJeff Osborn, President of ISC, announcing we are considering this change at RIPE72 in Copenhagen May 26th:\nhttps://ripe72.ripe.net/archives/video/206/ ","permalink":"/blogs/isc-considering-a-change-to-the-bind9-license/","tags":["DNS","BIND","license","licensing","MPL","MPL 2.0","open source"],"title":"ISC considering a change to the BIND 9 license"},{"categories":["DNS","BIND"],"contents":"ISC’s BIND DNS server software is renowned for its rich feature set and compliance with standards but not for its performance.\nTo allow us to focus on BIND’s performance we have recently invested significantly in new hardware in our test lab for dedicated 24/7 performance testing, and have also developed a new web-based system for automated performance tests.\nWhilst we’ve had automated tests running in the past, they weren’t flexible, they didn’t run frequently enough, and the results were often skewed by high intrinsic variability. With this new system our developers can run on-demand tests against experimental code branches before merging those branches into our “master” code branch, and our QA staff can benchmark each planned release against previous versions to ensure that we are not inadvertently creating performance regressions.\nThe system maintains a list of build configurations that are each tested in turn, starting over again once every configuration has been tested. Instead of the single weekly run that we used to perform we now test each configuration two or three times daily. 
The system can be scaled horizontally by adding more test machines, allowing us to increase test capacity whilst maintaining centralised control.\nFor each build configuration, users can set up a wide variety of options including which code branch (or tag) to extract from “git”, whether BIND is running as a recursive or authoritative server, etc.:\nTo measure mean throughput we start BIND and then start up Nominum’s dnsperf on a separate server 30 times for 30 seconds at a time, calculating the mean and standard deviation over those runs. We ignore the first dnsperf run for statistical purposes so as to get a better picture of “steady state” performance without the figures being skewed by start-up effects such as filling the cache when running recursive tests.\nEach configuration above also has options to control the set of zones to be loaded into the server (for authoritative tests) and the set of queries to be sent from “dnsperf”. To emulate various authoritative domain hosting models we have zone configurations such as:\nThe root zone\n1000 small zones with a single A and AAAA (e.g. a low-volume domain hoster)\n1M small zones (e.g. a large domain hoster)\nA single zone with 1M A / AAAA records\nA single zone with 1M NS delegation records (e.g. a TLD)\nFor recursive tests we send queries that target a separate authoritative server that holds both the aforementioned single zone with 1M delegations and each of the 1M small zones referred to therein.\nAll build and test output is logged and captured for display and graphing and for statistical analysis. The screenshot below shows the historic results for one configuration. Users can drill down through this UI to see the complete dnsperf output for every test run. 
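The throughput statistic described above (repeated dnsperf runs with the first run discarded as warm-up) reduces to a small calculation. A sketch, using illustrative queries-per-second figures rather than real lab data:

```python
import statistics

def summarize_runs(qps_per_run):
    """Drop the first (warm-up) run, then report the mean and sample
    standard deviation of the remaining steady-state dnsperf results."""
    steady = qps_per_run[1:]
    return statistics.mean(steady), statistics.stdev(steady)

# The first run is low while the cache fills; it is excluded from the stats.
mean_qps, sd_qps = summarize_runs([52000, 100000, 102000, 98000])
```

Reporting the standard deviation alongside the mean is what makes it possible to tell a real regression from the run-to-run variability the system is designed to control.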
A snapshot of BIND’s memory usage is taken every five seconds and can be graphed too.\nWith this system in place we have been able to measure an 11% improvement in recursive performance with the introduction of “adaptive read/write locks” in BIND 9.11.0a1. We have also identified particular issues with BIND’s authoritative performance when returning a referral response (such as from the root zone or from a TLD) and our efforts are now concentrated on improving that. The tests on delegation zones have also shown the importance of the ‘acache-enable: yes’ setting for such zones, typically improving performance by 50%.\nWork continues on this system to reduce some remaining variability in results to allow even very small changes in performance to be measured. We also plan to add support for testing our DHCP products (ISC dhcpd and Kea).\nLater updates to this blog post:\nDNS-OARC 26, Madrid Presentation on Perf Lab\nISC\u0026rsquo;s Perf Lab tool shared on github\n","permalink":"/blogs/isc-performance-lab/","tags":["DNS","BIND","performance"],"title":"The ISC Performance Lab"},{"categories":["ISC","Company News"],"contents":"ISC has joined the Open Source Initiative as an Affiliate Member. ISC has developed and supported open source systems for many years. We are delighted to join forces with other open source advocates and developers to help bring a unified voice to our growing movement.\nQuoting from OSI\u0026rsquo;s press release today:\n“There are only a few organizations that can truly be credited for building the networked society we live in today. ISC is one of those organizations,” said Patrick Masson, General Manager of the OSI. 
“ISC has been cultivating the ideals of the Open Source Initiative from even before the OSI existed, and we are honored that they have joined us in our work to promote and protect open source software.”\nThe OSI Affiliate Member Program, available at no cost, allows non-profit and not-for-profit organizations—unequivocally independent groups with a clear commitment to open source—to join and support our mission to raise awareness and adoption of open source software and to build bridges among different constituencies in the open source community. Affiliate Members participate directly in the direction and development of the OSI through Board of Director elections as well as incubator projects and working groups that support software freedom. OSI membership provides a forum where some of the world\u0026rsquo;s most successful open source software leaders, projects, businesses and communities engage through member-driven initiatives to promote and protect open source software, while also extending and improving their own open source efforts through co-creation, collaboration and community.”\n\u0026mdash; https://opensource.org/node/805\n","permalink":"/blogs/isc-joins-the-open-source-initiative/","tags":["license","licensing","open source"],"title":"ISC Joins the Open Source Initiative"},{"categories":["BIND","DNS","DHCP","Kea"],"contents":"On February 16th, news reached ISC about CVE-2015-7547, a serious defect in the getaddrinfo() library call in glibc. This defect allows an attacker to cause a buffer overflow to occur, creating the possibility of remote code execution in some circumstances. ISC has been asked by several of our customers and partners to comment on whether this vulnerability should be of concern to operators using our products.\nAs general advice\nOf course this should be of concern to anyone operating a system which uses glibc.
A common system call containing a bug which potentially permits remote code execution calls for immediate patching and the best solution is to immediately seek a patched version of glibc which has been secured to prevent this vulnerability.\nSpeaking specifically about ISC\u0026rsquo;s products and their exposure\nIn response to requests from our customers, we have examined our BIND, ISC DHCP, and Kea products to assess the risk this vulnerability poses to customers using those products to provide services.\nConcerning BIND \u0026ndash; the named nameserver daemon which is the heart of BIND does not make use of the getaddrinfo() function from the system library in any way which we believe can be exploited. Instead it performs resolution using its own code (including a function in named with the same name, but which is different from the getaddrinfo() from the system library). However, the system library\u0026rsquo;s getaddrinfo() call is used by several utility programs (e.g. dig, delv) which are distributed as part of the BIND package. Despite the fact that nameserver risk exposure is minimal, fixing the system libraries is still strongly recommended. Concerning ISC DHCP \u0026ndash; ISC DHCP can make use of getaddrinfo() in some circumstances. We do not believe the risk to DHCP operators to be high but fixing the system libraries as soon as possible is still strongly recommended. Concerning Kea \u0026ndash; the Kea DHCP server does not make use of getaddrinfo(). Some of the unit tests which are distributed with the source code can use getaddrinfo() but the risk to Kea server operators is minimal. We still recommend that you fix your system libraries as soon as possible because of all of the other components of a system that may make use of this common library call, but Kea itself should not be at risk. 
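For context, getaddrinfo() is the standard library call for turning a host name into socket addresses. The sketch below is hypothetical usage, not code from any ISC product; Python's socket module wraps the same underlying C routine, so on a glibc system it exercises the code path at issue:

```python
import socket

# socket.getaddrinfo() is a thin wrapper over the C library's
# getaddrinfo() -- the function affected by CVE-2015-7547 -- so any
# program resolving names this way depends on a patched glibc.
results = socket.getaddrinfo("localhost", 53, proto=socket.IPPROTO_UDP)
for family, socktype, proto, canonname, sockaddr in results:
    print(family, sockaddr)
```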
Static and dynamic linking\nOn most systems, programs which are built as part of the ISC products are configured to be dynamically linked, making use of system libraries which are found at runtime. However, it is possible to deliberately build statically linked binaries in which library routines are incorporated into the binary images produced at compile time. If you have built statically linked versions of ISC programs you must fix your system library first and then rebuild and relink the ISC products to ensure that you are now using the corrected library. Most users, however, are using dynamically linked libraries (which are the default build option) and need only fix their libraries and then stop and restart the ISC product \u0026ndash; re-linking is not usually necessary for those who have built using the default options.\nConcerning countermeasures\nThe CVE-2015-7547 announcement contains, as do a number of additional articles which are circulating concerning the vulnerability, a list of recommended countermeasures intended to prevent successful exploitation of the defect. Among the recommendations are such proposed countermeasures as blocking UDP packets that are larger than 512 bytes, limiting the permitted size of TCP replies, and discouraging the use of EDNS0. 
ISC would like to advise customers that we recommend caution when applying such measures as they can interfere with DNS operations, causing problems of their own.\nResponses that require more than 512 bytes are legal and normal in DNS and proper EDNS0 operation is important for DNSSEC validation and other critical DNS functionality.\nWe continue to recommend that the best course of action is to replace the vulnerable library on affected systems but if you must, for some reason, adopt any of the countermeasures we counsel closely monitoring your DNS for signs of problems which may be created or exacerbated by the specified measures.\n","permalink":"/blogs/a-few-words-about-the-glibc-vulnerability-cve-2015-7547/","tags":["CVE","glibc","getaddrinfo"],"title":"A Few Words About the glibc Vulnerability, CVE-2015-7547"},{"categories":["ISC","Company News","Annual Report"],"contents":"We are proud of another year of solid, responsible maintenance of our three core software products, BIND, ISC DHCP and Kea.\nBIND 2015 Releases We issued four BIND9 maintenance releases and twelve security patch releases. In addition we made five releases of our -S edition for subscribers and two experimental releases. The last two types of releases incorporate both features and bug fixes.\nMaintenance releases: 9.9.7 \u0026amp; 9.9.8; 9.10.2 \u0026amp; 9.10.3 Security releases: 9.9.6-P2, 9.9.7-P1, 9.9.7-P2, 9.9.7-P3, 9.9.8-P2, 9.10.1-P2, 9.10.1-P2, 9.10.2-P1, 9.10.2-P2, 9.10.2-P3, 9.10.2-P4, 9.10.3-P2 Subscription releases (incorporate both features and bug fixes): 9.9.6-S3, 9.9.7-S1, 9.9.7-S6, 9.9.8-S1, 9.9.8-S3 Experimental releases (for testing the Resolver DDOS mitigation features): 9.9.6-EXP1, 9.9.6-EXP2 New Feature Development In BIND 9.10.3 we released two significant new features, ‘fetches per zone’ and ‘fetches per server’ which enable resolver rate limiting. These features have proven useful for mitigating the effects of DDOS attacks on resolvers.
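For reference, these limits are configured in named.conf. Below is a minimal sketch with purely illustrative values, not recommendations; consult the BIND ARM for your version for the exact syntax and tuning guidance:

```
options {
    // Cap concurrent recursive fetches for names in any one zone;
    // "drop" discards excess client queries once the quota is reached.
    fetches-per-zone 200 drop;

    // Cap concurrent fetches outstanding to any one authoritative server.
    fetches-per-server 400;
};
```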
(See /blogs/tldr-resolver-ddos-mitigation/.) We worked on the new features which will be released in the upcoming BIND 9.11 version, incorporating a significant dyndb driver patch submitted by Red Hat. Other features developed include dnstap support, nxdomain-redirect, two features addressing IPv6 bias, a message-compression option, transfer-message-size and miscellaneous improved statistics counters. We published an IETF draft on the design for Catalog Zones, a significant feature we are adding in BIND 9.11. We previously published requirements documents for these features in Google Docs to solicit feedback from users. Other Accomplishments We finalized the BIND DNSSEC Guide and added a very concise DNSSEC for BIND Quick Reference Guide. We updated our root hints file to reflect the new addresses for H Root, which was renumbered in December, 2015. We began giving accounts on our issue tracking system to open source packagers, giving them access to their own reports in our system. This is a fairly limited step towards greater openness, but we need to also preserve the privacy of issue reporters who sent in reports to a closed system with a reasonable expectation that it would remain closed. Maintenance \u0026amp; QA We opened 625 new issues in BIND. We resolved 55% of the new issues opened, for a total of 486 (including some older issues). As a result, our backlog of issues grew. We don’t have an accurate breakdown of the proportion of these issues that are feature requests vs defect reports, but we know that at least 12% of the reports are enhancement requests. However, when you focus on the most important issues, those rated as Severity 0 (critical) or 1 (high), we resolved many more than were opened. We opened 50 new issues and resolved 70 existing issues. 16% of the tickets we resolved were high severity.
We make extensive use of the free Coverity open source static analysis service and display the current Coverity status of each of our major software packages on the ISC website. As of February 1, 2016, the Coverity open source scan was showing 0 BIND defects. We fixed 40 Coverity-identified defects in BIND in 2015. After we received a report of a security vulnerability uncovered using the AFL fuzz testing tool, we developed our own tests using the AFL tool and have made it a regular part of our regression process. Despite this, we subsequently received at least one additional bug report from someone outside ISC, again discovered using the AFL tool. Four new system test groups were added to BIND 9 (ednscompliance, fetchlimit, rpzrecurse, and statschannel). Approximately 194 new system tests were introduced. We did extensive testing of RFC 5011 DNSSEC key rollovers with BIND, in preparation for a possible root key rollover. Security We issued seven BIND CVEs. Several organisations have asked us whether the \u0026ldquo;GHOST\u0026rdquo; vulnerability in the glibc library has an impact on BIND. After examining the code, we think that the core BIND is immune to the problem; however, the contributed MySQL DLZ driver is vulnerable. We were also asked about some OpenSSL vulnerabilities. We tested and verified an updated version of OpenSSL and recommended users update. We had several security vulnerabilities reported just as we entered the holiday period. The release dates of the fixes were the subject of considerable internal discussion because we know that holiday freeze periods are common. In 2016 we are planning to publish a policy on security announcement timing during holiday periods.
ISC DHCP 2015 Releases We issued five DHCP maintenance releases in 2015.\nMaintenance releases: DHCP 4.1-ESV-R11 \u0026amp; DHCP 4.1-ESV-R12, DHCP 4.2.8, DHCP 4.3.2 \u0026amp; 4.3.3 (plus 10 beta and release candidate versions) We discontinued support for ISC DHCP version 4.2 after publishing DHCP 4.2.8 Development The 4.3.3 release incorporated over two dozen changes to the contributed LDAP code. Although reviewing and integrating all of these patches was a significant job for us, we appreciate the community contributions. In release 4.3.3 we added a compile time option that we believe will improve performance significantly for most users. We also fixed an issue that caused the lease queues to be incorrect. This could manifest during failover as an inconsistency between the two servers causing one of them to appear to run out of addresses. We published an article on DHCPv4 server performance, explaining where the bottlenecks are, and suggesting some ways to improve performance. Maintenance \u0026amp; QA The backlog of DHCP issues shrank modestly in 2015. We opened 124 new issues and resolved 50% of them, resolving 133 issues in total. At least 18% of these issues were feature requests, the remainder being defect reports. We also reduced the backlog of the most severe (S0 \u0026amp; S1) issues, we opened 14 new issues and resolved 17 existing issues. As of January 22nd, 2016, ISC DHCP had 22 defects listed in Coverity Scan, for a low defect density of .18. We fixed 14 Coverity defects in 2015. We transitioned our on-going continuous integration testing from Robie to the newer Jenkins system. This has enabled us to add a number of additional operating systems to our regular regression, and will eventually allow us to provide public visibility to our build status. 
We are now running continuous regressions against: CentOS 6.x (32 bit), 7.x (64 bit), Debian 7.x (64 bit), Fedora 20 (32 \u0026amp; 64 bit), FreeBSD 10.1 (32 \u0026amp; 64 bit), HPUX11 (64 bit), RHEL 6.x (64 bit), Solaris 11 (32 bit), Ubuntu 14.04 LTS (32 \u0026amp; 64 bit). Security We did not have any significant security vulnerabilities in ISC DHCP to report in 2015, although we received one at the end of the year which was disclosed in early 2016. We checked DHCP for the GHOST problem (see above) and do not believe that it poses any security vulnerability.\nKea 2015 Releases We issued two Kea feature releases in 2015, Kea 0.9.2 and Kea 1.0. New Feature Development Kea 0.9.2 added statistics, logging and a remote control channel, features required for ISC to provide technical support for production deployments. Kea 1.0 was a major update, adding lease expiration, basic client classification, DECLINE support in both DHCPv4 and DHCPv6, PXE and iPXE boot support, and limited support for storing DHCPv4 host reservation information in MySQL. Other Accomplishments We began offering professional technical support subscriptions for Kea, and got our first customer, a large ISP who used Kea in a public wifi service offering. We created a prototype Secure DHCP implementation with the ISC DHCP client which was demonstrated at the IETF Hackathon in Prague. We received a significant community contribution, not yet complete, implementing DHCPv4 over DHCPv6. Facebook did another public presentation on how using Kea to virtualize DHCP services for their massive internal datacenter simplified operations. (https://www.isc.org/blogs/how-facebook-is-using-kea-in-the-datacenter/) At the end of 2015, with the release of Kea 1.0, we changed the open source license agreement. We decided to move to the Mozilla Public License (MPL 2.0)(https://www.isc.org/blogs/kea-license-2-0/). Our goal was to ensure that Kea development will be supported in the future by those people who use and extend it.
Maintenance \u0026amp; QA Unlike our other software, Kea uses an open bug database that anyone can review.\n204 defects were identified in 2015, and 143 were closed. We closed a total of 346 tickets in 2015, including 124 enhancements and 70 tasks, in addition to the defects. As of February 1, 2016, Coverity Scan showed 121 defects in Kea. This equates to a defect density of .47, average for a project of Kea\u0026rsquo;s size. The Kea team fixed 96 defects identified by Coverity in 2015. The Kea continuous integration testing is done using Jenkins automation. The current test status is always visible at https://jenkins.isc.org Security We did not have any significant security vulnerabilities in Kea to report in 2015, although we received one at the end of the year which was disclosed in early 2016.\nSummary Statistics Software Software Releases Issues Opened in 2015 Issues Resolved in 2015 Coverity Defects Fixed/Density Security Vulnerabilities BIND 9 23 625 486 40/0 7 ISC DHCP 5 124 133 14/.18 0 Kea 2 204 143 96/.47 0 ","permalink":"/blogs/2015-isc-open-source-accomplishments/","tags":["2015","annual report","results"],"title":"2015 ISC Open Source Accomplishments"},{"categories":["ISC"],"contents":"Do you work for an open-source \u0026ldquo;free-loader\u0026rdquo;? This week is a great time to assess what open source your organization uses, and make sure you are doing \u0026ldquo;your share\u0026rdquo; to support these community resources. All tech companies use open source, but not all of them support it. Everyone loves leveraging open source, but not every organization shows that love in a practical way: https://nedbatchelder.com/blog/201511/funding_free_software.html\nI was pleased to discover that Mozilla has recently done an open source inventory, and put together a plan to support it.\nOn December 10th Mozilla made the first of their \u0026ldquo;Open Source Support\u0026rdquo; awards, handing out $503K of $1 million in awards.
(Read about it here.)\nSurprisingly few tech companies, who presumably recognize the cost and value of creating software, donate money to open source projects. In 2015, ISC received exactly two significant corporate donations, from Craigslist and from Facebook (thank you!). I am sure that there are many more tech companies out there who rely on our DHCP and BIND DNS software, and most of them don\u0026rsquo;t support this open source at all. Of course it is easy to assume that all open source has some other means of support: a foundation, or adjunct products that generate revenue, or \u0026ldquo;other users\u0026rdquo; who contribute.\nISC is not only a publisher of open source, we are also a consumer. When I think about the open source that we use at ISC, a small, 35-person company, I can come up with quite a list without much effort.\nWe aren\u0026rsquo;t perfect either: we support a few of these projects financially, we support a few of them technically, and some of them, we don\u0026rsquo;t support at all:\nBest Practical\u0026rsquo;s RT issue tracker (we pay for support) (https://bestpractical.com) WordPress website software (we pay for some plug-ins and ecosystem products but don\u0026rsquo;t donate to the maintainers at https://automattic.com) as well as wiki software (http://twiki.org) and Trac (https://trac.edgewall.org) Jenkins continuous integration software (https://jenkins.io/), other development tools, such as the Clang analyzer (http://clang.llvm.org) and git (https://git-scm.com) The popular BIND-users and DHCP-users mailing lists are powered by Mailman (https://www.gnu.org/software/mailman/) and we use Zimbra for our internal email system (https://www.zimbra.com) We rely heavily on Etherpad for internal meeting notes (https://etherpad.org/) We use multiple Unix and Linux operating systems, most of which we contribute code to, C, C++ and Python, and we use MySQL and PostgreSQL, among other open source components and libraries, in the open source
software we publish. I personally use the Adium XMPP client (https://adium.im) and some of us use Apache OpenOffice desktop software (https://www.openoffice.org) ISC is non-profit, but we are at least able to pay our software engineers a living wage, and provide (depending on their location) health insurance. If your employer is profitable, and able to pay you a living wage, consider advocating for contributions to the open source projects you use in your job. Even small contributions ($100? $500?) will make a difference to these programs and are still a bargain for software you use every day.\n","permalink":"/blogs/resolve-to-become-more-of-a-contributor-less-of-a-free-loader-in-2016/","tags":["open source","contribute"],"title":"Resolve to Become More of a Contributor, Less of a Free-Loader, in 2016"},{"categories":["ISC"],"contents":"In addition to all the work we do\u0026ndash;maintaining BIND 9 and ISC DHCP, developing Kea, and operating F-Root and our subsidized SNS and hosting services\u0026ndash;ISC employees make a number of contributions to other open source projects and Internet organizations.\nInternet Organizations ISOC - David Farber, one of ISC\u0026rsquo;s Directors, is a current Trustee of the Internet Society. Jeff Osborn, ISC President, is on the ISOC Advisory Council. Leah Symekher, ISC Account Manager, is San Francisco Bay Area ISOC Chapter President. ISC as an organization is a \u0026ldquo;Gold\u0026rdquo; member of ISOC.\nIETF - ISC sends 4 or 5 engineers to every IETF meeting and our technical staff participate vigorously in the development of new standards throughout the year, authoring, revising or contributing to 23 different drafts in 2015. Tomek Mrugalski, the lead developer of our Kea DHCP server, is co-chair of the IETF DHC working group. Ray Bellis, Research Fellow, is co-chair of the IETF Homenet working group.
Jim Martin, ISC Director of Operations, has led the team that builds the IETF network for well over a decade, and for each of its 2015 meetings.\nDNS-OARC - Ray Bellis is taking up a post on the OARC program committee in January. ISC as an organization is a \u0026ldquo;Gold\u0026rdquo; member of DNS-OARC.\nUKNOF - Stephen Morris, Sr. Director of Engineering, is a member of the Advisory Committee, and Cathy Almond, our Lead Technical Support Engineer, is co-chair of the Programme Committee for the UK Network Operators Forum. Ray Bellis is also a member of the programme committee, but is stepping down in 2016.\nICANN - Jim Martin is Root Server System Advisory Committee representative for F-Root; Suzanne Woolf, a consultant to ISC, is alternate. Suzanne is also RSSAC liaison to the ICANN board. Ray Bellis is a member of the RSSAC Caucus.\nOpen Source Projects Dibbler - Tomek Mrugalski developed this DHCPv6 server before joining ISC, and he continues as lead developer, issuing version 1.0.1 in 2015.\nEvldns - Ray Bellis developed this library and framework for small fast DNS servers while at Nominet. He continues to maintain it on github and assist other researchers in using this tool, while working at ISC. Currently, APNIC is using Evldns for some DNSSEC experiments.\nGIMP - Mukund Sivaraman is one of the current developers of the GNU Image Manipulation Program.\nNetBSD - Jeremy Reed, QA Manager, is Vice President of the NetBSD Foundation and serves on the Board of Directors for the Foundation. 
He is also on the Board of Directors for the BSD Certification Group, Inc.\nOpenDNSSEC - Francis Dupont, software engineer, has contributed to the SoftHSM component of OpenDNSSEC.\n","permalink":"/blogs/isc-contributions-to-internet-organizations/","tags":["open source","community","ISC","non-profit"],"title":"ISC Contributions to Internet Organizations"},{"categories":["DHCP","Kea"],"contents":"In the past, ISC has released all of its open source software under a simple license that we created. It\u0026rsquo;s called \u0026ldquo;The ISC License\u0026rdquo;; see https://en.wikipedia.org/wiki/ISC_license.\nThe software world has changed in the years since we wrote the ISC license. Hundreds of lawyers have been involved in the writing and editing of open source licenses. There are many and varied licenses in use around the internet. The Open Source Initiative maintains a list of them here: http://opensource.org/licenses/category where, as you can see, the ISC License is identified as \u0026ldquo;uncategorized.\u0026rdquo;\nISC is getting ready to release V1.0 of our new Kea DHCP software, and we have decided to release it under a more modern and more widely used license, namely the Mozilla Public License 2.0. The Mozilla Public License 2.0 has exactly one important difference from the ISC License: if you modify our code, and distribute the modified code to others, you have to distribute the changes you made to our code under the same license terms under which you received it.
That distribution will include us, which gives us the opportunity to incorporate your changes into our source.\nBefore you read any further here, perhaps you should go look at the Quick Summary of MPL2.0 published by tl;drLegal: https://tldrlegal.com/license/mozilla-public-license-2.0-(mpl-2) or the FAQ published by the Mozilla Foundation: https://www.mozilla.org/en-US/MPL/2.0/FAQ/ or the license itself at https://www.mozilla.org/en-US/MPL/2.0/.\nThere is no single reason why ISC is making this change. Our goal of providing high-quality free Internet software will never change. We want to make sure that our source code is available to everyone and that they can use it pretty much however they want. As commercial use of open-source software continues to increase, we see increased demand for a well-known license that lawyers are familiar with and already understand. There is no longer a good reason for ISC to have its own license, separate from everything else, even though we had and were using our license years before most of these other licenses existed. Almost certainly the ISC License was one of the first few published open source licenses, and it\u0026rsquo;s hardly changed in all those years.\nIf a company uses our software but improves it, we really want those improvements to go back into the master source. It will make the software better for everyone. The MPL2.0 license requires that if you make changes to licensed software (e.g. Kea) and distribute them outside your organization, that you publish those changes under that same license. It does not require that you publish or disclose anything other than the changes you made to our software. This requirement is enough to ensure that our public source will be improved by your work, but not so much that you might be tempted to call the license a \u0026ldquo;virus.\u0026rdquo;\nIt was almost 30 years ago that we wrote the first version of what would evolve into the ISC license. 
The application that Kea will replace - ISC DHCP - is more than 20 years old. We hope that Kea, licensed under the more-modern MPL2.0, will serve as well for the next 20 years.\n","permalink":"/blogs/kea-license-2-0/","tags":["open source","DHCP","Kea","license","licensing","MPL","MPL 2.0"],"title":"Kea to be released under Mozilla Public License 2.0"},{"categories":["ISC","DNS"],"contents":"http://h.root-servers.org/renumber.html\nThe Internet DNS root nameservers are the servers which are authoritative for \u0026ldquo;.\u0026rdquo;, the apex of the DNS namespace. They are the starting point for resolving all public names. If you\u0026rsquo;re running a recursive server, you\u0026rsquo;re either relying on a built-in set of \u0026lsquo;root hints\u0026rsquo;, or you will have configured them manually.\nSo isn\u0026rsquo;t renumbering one of these vital servers a big deal for all Internet users?\nActually, no, it isn\u0026rsquo;t. There are 13 Internet root servers, many of them multi-server provisioned via Anycast. DNS resolution will still work as long as there is at least one root server accessible. 
Moreover, because of the way the DNS resolution protocol is designed to include a process known as \u0026lsquo;root priming\u0026rsquo;, recursive servers are going to learn and start using the new addresses almost immediately anyway.\nH-Root can change its addresses, and if you do nothing, your DNS will carry on working.\nEven better - if you are running ISC BIND with the default built-in root hints, and you\u0026rsquo;re upgrading regularly, you can rest assured that the next production versions of BIND to be released after 1st December 2015 will have updated H-Root\u0026rsquo;s addresses.\nWe have a Knowledgebase article that explains how \u0026ldquo;root priming\u0026rdquo; works: https://kb.isc.org/docs/aa-01309\nIf you are running your recursive servers with a manual root hints configuration, then we do recommend that soon after 1st December 2015, you update the H-Root addresses to accommodate this change. If you don\u0026rsquo;t, and you\u0026rsquo;re running BIND, you will start to see some warnings being logged periodically that will look a little like:\n02-Dec-2015 10:21:32.196 general: warning: checkhints: h.root-servers.net/A (198.97.190.53) missing from hints 02-Dec-2015 10:21:32.196 general: warning: checkhints: h.root-servers.net/A (128.63.2.53) extra record in hints (There will also be warnings regarding H-Root\u0026rsquo;s AAAA records.)\nAdministrators running with built-in root hints will continue to see the same warnings until they can upgrade to a new version of BIND that has the updated addresses.\nOnce you\u0026rsquo;ve upgraded BIND, or have updated your own manually-configured hints file, the logged warnings should cease.\nEven if there is a significant delay in your upgrade or reconfiguration, your DNS resolution will still not break due to this H-Root change. 
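For administrators maintaining their own hints file, the updated H-Root entries after the renumbering look like the fragment below. The A address matches the log message above; the AAAA address shown, 2001:500:1::53, is the published new IPv6 address, but you should verify both against the current distributed root hints file:

```
.                      3600000   NS     H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET.    3600000   A      198.97.190.53
H.ROOT-SERVERS.NET.    3600000   AAAA   2001:500:1::53
```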
The combination of root-priming and the built-in redundancy on the Internet name space (13 root nameservers) both provide a high degree of resilience.\nIn summary:\nYou may see some warnings in your BIND logs on or after 1st December 2015 - these don\u0026rsquo;t mean that your DNS resolution is broken. Only administrators using manually-configured root hints need to update their configuration. The change doesn\u0026rsquo;t need to be immediate or timed to coincide exactly with the update to the distributed root hints. Even if you\u0026rsquo;re unable to upgrade to a new version of BIND until after the old H-Root addresses are shut down, your DNS will not break. Keep calm, and carry on\u0026hellip;\nCathy Almond, Senior Support Engineer\n","permalink":"/blogs/h-root-will-change-its-addresses-on-1-december-2015-what-does-this-mean-for-you/","tags":["DNS","H-Root"],"title":"H-Root Will Change Its Addresses on 1 December 2015. What Does This Mean for You?"},{"categories":["BIND","DNS"],"contents":"With the recent spate of patch releases of BIND due to security issues, I thought that it was worth putting fingers to keyboard to shed some light on the sources of these problems and what ISC is doing about them.\nISC has a formal process for handling reports of security bugs. If we think the reported issue is serious enough, we will issue a release of the software containing the fix, and a security advisory explaining the problem. Although underlying reasons can be different, many of these advisories report the cause as the issue \u0026ldquo;triggering an assertion in BIND, after which BIND exits.\u0026rdquo; So what are assertions, and why do they cause BIND to crash?\nWhen writing BIND 9, the authors were very mindful of security. They considered a security hole that allows a compromise of the machine on which the name server is running (for example, by allowing remote code execution) to be worse than one that causes the program to exit. 
A flaw that allowed an attacker to control the information returned in response to queries opens the way for fraudulent and illegal activities; process termination, although a denial of service, was judged to be less harmful.\nTo catch programming errors that could lead to such a compromise, BIND was created using a \u0026ldquo;design by contract\u0026rdquo; paradigm. In this approach, assertions are made throughout the code as to the state of variables at certain points. If these are violated, BIND will do a controlled termination rather than continuing with possibly corrupted data. There are thousands of such assertions scattered throughout the BIND source code. Note that the assertions are not concerned with the data received by BIND (either in the form of DNS requests or responses) - BIND has no control over that. Instead, assertions are made about the way the program is operating.\nA good example of this is the problem (CVE-2015-5477) that resulted in the release of BIND versions 9.9.7-P2 and 9.10.2-P3 in July 2015. Here, a function in BIND required that the variable into which it was going to place a pointer to data should be NULL (i.e. empty), and included an assertion to that effect. The reasoning for this was that in order to use the function, the programmer should be aware that the contents of the variable will be modified, and should signal that awareness by ensuring that it was NULL before calling the function. Passing a non-NULL variable to the function might indicate that the programmer had made a mistake and that the variable was already pointing to valid data. Should this be the case, overwriting the variable might lead to invalid data being used and a possible compromise. For this reason, BIND was written to terminate should that state of affairs occur.\nIn this particular case the variable had been set to NULL and the function called. 
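The out-parameter contract just described can be sketched in Python. BIND's actual checks are C macros such as REQUIRE(); everything below is illustrative stand-in code, not BIND source:

```python
def require(condition, message):
    # Stand-in for BIND's REQUIRE() macro: on a contract violation,
    # terminate in a controlled way rather than continue with
    # possibly corrupted state.
    if not condition:
        raise AssertionError(f"REQUIRE failed: {message}")

class Ref:
    """Hypothetical out-parameter holder (C code would use a pointer)."""
    def __init__(self):
        self.value = None

def lookup(out, key):
    # Contract: the caller passes an empty holder, signalling awareness
    # that its contents are about to be overwritten.
    require(out.value is None, "out-parameter must be NULL")
    out.value = f"data-for-{key}"

out = Ref()
lookup(out, "example.com")          # fine: the holder starts empty
try:
    lookup(out, "example.com")      # not reset first: contract violated
except AssertionError as exc:
    print(exc)
```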
However, a corner case (which, amazingly, seems never to have been reached in 15 years of use) required the returned data to be discarded and the function called a second time. The error lay in the fact that the variable was not reset to NULL before the second call. As there was no way for BIND to tell that the content of the variable was pointing to discarded data, as opposed to valid data, BIND took the safer course and terminated.\nThe next question, of course, is why testing didn’t catch that bug, and this is something we have asked ourselves. Actually, since the bug was in all versions of BIND 9 (which was released in 2000), a good question is why it wasn\u0026rsquo;t seen in everyday use. It is believed that BIND is the most widely used name server on the Internet and so all instances combined handle a vast number of queries every day. Multiply that by fifteen years…\nThe answer is that for most of the time, all these instances of BIND handle queries and responses whose format conforms to the DNS standards. So the code paths executed by BIND in normal use are a subset of those possible. Any errors in those paths will be quickly discovered. Typically the errors resulting in security alerts lie in little-used paths that handle ill-formed queries and responses, and the bug lay in one such path.\nAll ISC code is reviewed before being included in a release. It is also run through static analysis tools that help identify issues such as these. Unit tests and system tests are also run on the code. ISC testing (and testing by others) does include the sending of malformed queries and responses to BIND, but the number of possibilities is vast and testing did not explore all cases.\nIt is at this point that \u0026ldquo;fuzzers\u0026rdquo; come in and, in particular, the \u0026ldquo;American fuzzy lop\u0026rdquo; (AFL) program. Fuzzers generate incorrect packets and send them to the program under test (in this case, BIND) to try to cause it to fail.
AFL is effective in doing this because it monitors the code coverage in the program under test and varies the packets so as to maximize the amount of code executed. The vulnerability described above was discovered using AFL, as was one of the vulnerabilities (CVE-2015-5722) that triggered the September 2015 release of BIND 9.9.7-P3 and 9.10.2-P4.\nDiscovery of that latter vulnerability was enabled by a recent enhancement to AFL that significantly increased the rate at which fuzzed packets were tested. Following that report, ISC ran exhaustive tests with the AFL, picking up several non-critical bugs (all corrected in the 9.9.8 and 9.10.3 releases of BIND). ISC staff discovered the second security vulnerability during a painstaking visual examination of code similar to that in which the bugs were found; it was subsequently picked up by AFL during additional testing with the fuzzer. In view of its effectiveness, ISC has now incorporated AFL into its BIND test suite.\nDr. Stephen Morris, Sr. Director of Software Engineering\n","permalink":"/blogs/summer_security_vulnerabilities/","tags":["DNS","BIND","security","CVE"],"title":"What is a BIND Assertion Failure?"},{"categories":["BIND","DNS","Security"],"contents":"As the security incident manager for this particular vulnerability notification, I\u0026rsquo;d like to say a little extra, beyond our official vulnerability disclosure about this critical defect in BIND.\nMany of our bugs are limited in scope or affect only users having a particular set of configuration choices. CVE-2015-5477 does not fall into that category. Almost all unpatched BIND servers are potentially vulnerable. We know of no configuration workarounds. Screening the offending packets with firewalls is likely to be difficult or impossible unless those devices understand DNS at a protocol level and may be problematic even then. 
And the fix for this defect is very localized to one specific area of the BIND code.\nThe practical effect of this is that this bug is difficult to defend against (except by patching, which is completely effective) and will not be particularly difficult to reverse-engineer. I have already been told by one expert that they have successfully reverse-engineered an attack kit from what has been divulged and from analyzing the code changes, and while I have complete confidence that the individual who told me this is not intending to use his kit in a malicious manner, there are others who will do so who may not be far behind. Please take steps to patch or download a secure version immediately.\nThis bug is designated \u0026ldquo;Critical\u0026rdquo; and it deserves that designation.\n","permalink":"/blogs/about-cve-2015-5477-an-error-in-handling-tkey-queries-can-cause-named-to-exit-with-a-require-assertion-failure/","tags":["DNS","BIND","security","CVE"],"title":"About CVE-2015-5477"},{"categories":["BIND","DNS","security"],"contents":"Early in 2014 a couple of our BIND support customers told us about some intermittent periods of very heavy query activity that swamped their resolvers and asked us for help. It emerged that these were just the first signs of a long series of similar DDOS (Distributed Denial of Service) attacks that began in early 2014 and are continuing today around the Internet. After a year of experimentation and revision, we are ready to release some BIND features that have been proven to help with this kind of attack. This article explains how we got here. It is a long narrative: if you just want to know about the new features, skip to the end and look at the references listed.\nThe Attack Although the DDOS has evolved a bit over the past year or year and a half, the broad outlines have remained the same. Actually, this type of attack was first described publicly in Beijing in 2009 at a DNS-OARC meeting. 
The attacker enlists a network of compromised endpoints, such as home gateways, to issue queries for non-existent sub-domains of real domains. The domains queried might be querty213.example.com and 3409uaf.example.com. This is not an asymmetric amplification attack, but a straightforward flood of queries. Because the sub-domains being queried don\u0026rsquo;t exist, no answers are cached in the network and every query generates a request for information to the authority for the domain. The authoritative server becomes overwhelmed and unresponsive because of the unusually high load (or because it has deployed rate-limiting techniques against perceived attackers, which is likely to include some large ISP recursive servers), and the resolvers holding open queries become bogged down waiting for answers and re-querying. Other researchers have reported that the attackers have switched from using large networks of home gateways to using fewer query generators. They also suspect that in some cases, the attackers are now targeting the resolvers purposely (called \u0026ldquo;sandwich\u0026rdquo; attacks).\nEarly Experiments It didn\u0026rsquo;t take long to figure out what was happening, so we started looking for a good mitigation solution.\nWe needed to balance multiple objectives:\nlimit the resolver resources expended on handling abuse traffic avoid further adding to the load on the authoritative server that was the victim of the attack continue to handle valid queries from legitimate users, dropping or delaying as few of them as possible First, we tried for a simple solution. We implemented a hold-down timer triggered when a server failed to respond X (configurable) times, and gave it to operators under attack to see how that worked. This wasn’t sensitive enough to work with authoritative servers that were responding intermittently, because each reply reset the \u0026lsquo;failed to respond\u0026rsquo; counter to zero. 
The intermittent responses might have been because the authoritative servers were not completely overloaded, or it may have been that we were experiencing the effect of their server administrators deploying Response Rate Limiting algorithms.\nMessing Around with Quotas After we decided that the hold-down timer would not be an adequate mitigation, we started looking at why the code that already exists to handle the backlog of recursive clients wasn\u0026rsquo;t managing the situation as well as we expected that it should. The client limit is implemented with a default of 1000, but configurable using the option recursive-clients. A user-configured value for recursive-clients provides a hard limit (as configured), and a soft limit (100 lower than the hard limit). When a new client query is received, if the hard limit has already been reached, BIND simply drops the client query. On the other hand, if the soft limit is reached, then named accepts the new query, but at the same time looks for the oldest outstanding query already in the backlog, and sends back a SERVFAIL to that one instead. Why was this system of backlog management not helping?\nWe realized three things:\nThe default limit of 1000 is purely a hard limit - so those admins who had not increased the limit did not have a soft quota at all and new client queries were being dropped when the backlog was full. Configuring limits larger than ~3500 would cause named to accept more client queries into the backlog, but named often then exhausted other resources that hadn\u0026rsquo;t scaled up correspondingly, resulting in SERVFAILs to clients and dropped responses when the query-handling failed internally. Configuring limits of ~3500 and lower was still insufficient protection for the busiest servers (ones with an already high query rate). 
This is because the \u0026lsquo;bad\u0026rsquo; query rate could be high enough that the backlog turnover was faster than the response time for some \u0026lsquo;good\u0026rsquo; authoritative servers - so \u0026lsquo;good\u0026rsquo; client queries weren\u0026rsquo;t surviving for long enough in the backlog before being aborted in favour of new queries. We experimented with a configurable soft-client-quota setting, but reverted to an automated value when we realized it was not actually possible to have a large number of \u0026lsquo;early discards\u0026rsquo; in progress. We updated the default recursive-clients setting, and went looking for a better solution.\nRate-Limiting by Zone Next we considered max-clients-per-query. When many clients simultaneously query for the same name and type, the clients will all be attached to the same fetch, up to the max-clients-per-query limit, and only one iterative query will be sent. Limiting clients per query wasn\u0026rsquo;t going to help with the type of traffic we were seeing in this DDOS attack, as all the queries are unique. The queries were, however, being sent for the same domain(s), so we decided to rate limit by zone. This was configured using a new option, fetches-per-zone, which defines the maximum number of simultaneous iterative queries to any one domain that the server will permit. We released fetches-per-zone in an experimental BIND build that we offered to support customers, and anyone who agreed to give us feedback, and waited for results.\nFetches-per-zone seemed to work for some users, but not for others. Fetches-per-zone was less effective in helping hosting providers and others with many zones per server. Even with a per-zone fetch limit, when multiple zones on the same server were under attack the server would still be overwhelmed. So, that feature did not adequately protect the victim in a shared-server environment. 
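In the released form of this feature (BIND 9.10.3 and later), the option takes roughly this shape in named.conf; the numeric value here is purely illustrative, not a tuning recommendation:

```
options {
    // Cap the number of simultaneous iterative queries that
    // this resolver will send for names in any one zone/domain.
    // Illustrative value; 0 disables the limit.
    fetches-per-zone 200;
};
```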
We thought we probably needed a fetches-per-server limit, which would be harder to implement in BIND.\nLooking for Better Ideas When we first learned about the upswing in these attacks, it seemed as if the impact on resolvers was unintentional. We were reluctant to give prospective attackers useful information about the opportunity for DDOSing resolver operators, so we tried to keep quiet about the impact of these attacks on resolvers. This of course made it much harder to get data and feedback on our experimental features. We didn\u0026rsquo;t have enough service providers testing the software and providing feedback. We had already spent months working on this problem, and we still didn\u0026rsquo;t have a really effective solution.\nWe decided to hold a forum at the regular IETF meeting in July of 2014 to ask for advice. We were still concerned about encouraging attackers, but we couldn\u0026rsquo;t afford to spend more time on trial-and-error approaches. We invited BIND users and developers and users of other DNS systems. Even though it was organized at the last minute, a lot of people came and contributed. Some of them were already dealing with this DDOS problem in their own networks; others were hearing about it for the first time.\nConsulting the Community By the time of the IETF meeting (July 2014) we had implemented the hold-down timer, soft quotas, and fetches-per-zone. All of these had limitations.\nAt the meeting, we talked about how to rate limit the traffic, based on per zone or per server. There was general support for the rate-limiting approach. Wouter Wijngaards of NLnet Labs shared what he was implementing in Unbound (rate limiting queries per server) and said we should definitely look at making that adaptive, so that the number of simultaneous fetches per server adjusted based on the authority\u0026rsquo;s responsiveness. We also discussed how best to drop traffic once the resolver\u0026rsquo;s limits were reached. 
Our soft-quota feature dropped the oldest waiting query in favor of the newest. We thought that maybe a random early drop policy would work better. We had a few other ideas as well. A whitelist for fetches-per-zone could protect popular zones from rate-limiting. A reusable single socket for all queries to a particular authority would protect the resolver, and the authority, but would do nothing to preserve the backlog of valid queries. We left the IETF meeting thinking that a new fetches-per-server option would give the best results, and that we needed a feature that would automatically detect when the server was overloaded and adapt, rather than enforcing a static limit.\nAnother outcome of that meeting, for me at least, was the happy realization that smart, informed people in the community were perfectly willing to contribute their expertise. Nobody criticized us for not having all the answers. \u0026lsquo;Competing\u0026rsquo; developers were facing the same problem and were willing to share ideas. This meeting probably indirectly led us to the agreement, later in the fall of 2014, between ISC and NLnet Labs to offer technical support for the NLnet Labs DNS servers.\nTesting Resolver Rate-limiting in Experimental Releases The fetches-per-server feature, implemented in late summer 2014, monitors the ratio of timeouts to successful responses from authoritative servers; when the ratio rises above a specified threshold, the number of simultaneous fetches that will be sent to that server is automatically tuned downward. When the server recovers and the timeout ratio drops below another threshold, the fetch limit is automatically tuned higher. 
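In named.conf terms, the adaptive quota looks roughly like this (BIND 9.10.3+ syntax; the values are illustrative, close to the shipped defaults rather than tuning advice):

```
options {
    // Adaptive cap on simultaneous fetches to any one
    // authoritative server (illustrative starting value).
    fetches-per-server 500;
    // <recalculation-interval> <low> <high> <discount>:
    // thresholds on the timeout ratio that tune the per-server
    // quota down when exceeded and back up on recovery.
    fetch-quota-params 100 0.1 0.3 0.7;
};
```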
The parameters for this tuning can be configured using fetch-quota-params.\nBecause BIND has a large installed base, spanning from the very small enterprise operator to large ISP and carrier users, we can\u0026rsquo;t release experimental features widely until we are sure of their efficacy and have verified that they cause minimal collateral damage. We had a dozen or so service providers try out our new features and several provided detailed feedback. Jazztel, a Spanish ISP, gave us permission to use some charts from their production network in public presentations. Fetches-per-zone was very effective in their environment. That encouraged us to go out and speak at conferences about the relative success we were seeing, and to invite other operators to request the experimental version of BIND.\nResponding to Rate-limited Clients At the conferences, we got some great questions that helped us to refine our approach further. At one meeting, Geoff Huston, Chief Scientist at APNIC, asked us whether we were sure we were sending back the right response to the clients we were rate-limiting. That led us to add a knob to specify whether to quietly drop the rate limited traffic or reply with a SERVFAIL. (SERVFAIL is the DNS message that means the server failed to respond, which will prompt the client to retry.) Either would be OK for ordinary clients, but it would be bad to fail to respond to another server that is forwarding, because effectively you’d be passing along to it, the bad impact of the DDoS. SERVFAIL is the ‘correct’ response per the DNS protocol – but some administrators may prefer ‘drop’ as the client behavior is going to be similar (will likely retry the query) anyway, so why send a response that will just be ignored, and which is using up your network bandwidth.\nWe also heard updates from other vendors about what they were doing to address the continuing DDOS problem. 
Nominum, for example, was committed to a blacklisting approach, derived from mining the data they collected from some of their customers. That wasn\u0026rsquo;t really an option for ISC because we didn\u0026rsquo;t have the data to create the blacklist.\nBIND 9.10.3 We have decided which new features to keep and release into the general distribution, and which to remove. We will be releasing fetches-per-server, fetches-per-zone, and the drop/SERVFAIL policy knob in regular BIND releases beginning with BIND 9.10.3 (off by default, of course). We have deprecated the soft client quota and defaulted the SERVFAIL caching to off. Since this will now be in general production use, we have added more counters and statistics on rate-limited traffic. We have also put quite a bit of labor into maintaining RPZ, which is used for implementing blacklists.\nLonger Term We are planning further refinements to both per server and per zone query rate limiting in BIND 9.11. We are hoping to be able to make DDOS mitigation more effective by leveraging cookies, once there is wider support for multiple EDNS options. We think aggressive negative caching, a new IETF proposal (https://datatracker.ietf.org/doc/draft-fujiwara-dnsop-nsec-aggressiveuse/?include_text=1), has a lot of promise. We are interested in exploring other ideas for mitigating DNS DDOS on a network, rather than server-level, and for enlisting help from connected providers in combating DDOS upstream from the victim.\nThe effort spent on developing, testing, and releasing DDOS mitigation features over the past year has come at the expense of many other things we wanted to do for BIND users. We have mitigated some of the damage and helped operators stay up to fight abuse another day. The DDOS-ers are not really deterred, but for now, they are somewhat less successful. We have got some incremental controls in BIND, and a renewed sense of shared fate with everyone working to keep the Internet safe for legitimate traffic. 
We are very grateful to the community of BIND users and network operators who shared data on the attack and who worked with us to test counter measures.\nYour next steps are:\nIf you want to learn more about this, look at the slides from the recorded webinar on the topic, given by Cathy Almond, ISC’s Sr. Support Engineer and Support Team Lead, https://www.isc.org/presentations, or view the recording, https://youtu.be/x52OAye0sXg. If you are not already participating on the bind-users@lists.isc.org mailing list, consider joining, or at least checking the archives occasionally Consider becoming an active beta tester for BIND and contributing your operational experience to improve support for DDOS mitigation and other features. BIND 9.10.3 should be available for beta testing in August 2015. References Documentation on using the experimental features in ISC\u0026rsquo;s KB\nRecent IETF draft Aggressive use of NSEC/NSEC3 that could substantially short-circuit this DDOS.\nPresentations November 13, 2014 LISA ’14, DNS Response Rate Limiting, a Mini-tutorial – Eddy Winstead, Sales Engineer February 3, 2015 NANOG 63, Pseudo-random Domain DDOS – Eddy Winstead, Sales Engineer\nMarch 4, 2015 Apricot 2015 – Random DNS Query Attack and Mitigation Approaches – Eddy Winstead, Sales Engineer\nMarch 10, 2015 Netnod Spring Meeting – Tales of the Unexpected, Handling Unusual Client Query Behaviour – Cathy Almond, Sr. Support Engineer\nMay 9, 2015 OARC 2015 Spring Workshop – Update on experimental BIND features to rate-limit recursive queries – Cathy Almond, Sr. 
Support Engineer\nJuly 8, 2015 ISC Webinar, Random Sub-domain Resolver DDOS Mitigation – Cathy Almond, Senior Support Engineer\n","permalink":"/blogs/tldr-resolver-ddos-mitigation/","tags":["DDOS","QRL","rate limiting","security"],"title":"Resolver DDOS Mitigation"},{"categories":["BIND","DNS","Performance"],"contents":"Introduction As part of an ongoing study into DNS server performance, we wanted to establish a baseline figure for the absolute maximum throughput that can be achieved using standard APIs. To this end we have developed a tiny DNS server that does nothing except echo the received packet back to the client, albeit with the “QR” bit flipped to indicate that the packet is a response and not a query.\nThe DNS echo server allows the user to specify how many times it should fork into separate processes, and how many threads (if any) to start within each process. It also supports a number of I/O models, including standard “blocking” I/O and non-blocking I/O using explicit poll(2) or select(2) calls, or via libevent. For multi-core systems it also supports locking processes or threads to a specific CPU core.\nWe have a test rig comprising a server machine with dual Intel Xeon X5355 quad-core processors running at 2.66 GHz and a client machine with dual Intel Xeon X5365 quad-core processors running at 3.0 GHz. The machines are connected via a 10 Gbps switch using Intel 82599ES-based Network Interface Cards. Both machines are running Fedora Server 21 with kernel 3.19.7-200 and the query generator is Nominum’s dnsperf 2.0.0 as packaged by Fedora and using the included sample query file.\nUntuned Tests The following graph shows the mean, minimum, and maximum throughput obtained for different numbers of server processes running with the blocking I/O model. Each data point is the result of ten 30-second runs of dnsperf using its default command line parameters. 
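The essence of such an echo server is only a few lines of C. The following is an illustrative sketch under the blocking-I/O model, not ISC's actual test tool; `set_qr` and `serve` are invented names for this example:

```c
/* Minimal sketch of a DNS "echo server": receive a UDP packet, set the
 * QR bit (0x80 in header byte 2) to mark it as a response, and send it
 * straight back to the client. */
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Set the QR bit in place; returns 0, or -1 for runt packets. */
int set_qr(unsigned char *pkt, size_t len) {
    if (len < 12)            /* shorter than a DNS header */
        return -1;
    pkt[2] |= 0x80;          /* QR=1: query becomes a "response" */
    return 0;
}

/* Single-process, blocking-I/O receive/send loop; never returns. */
int serve(unsigned short port) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0}, peer;
    socklen_t plen = sizeof(peer);
    unsigned char buf[4096];

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &plen);
        if (n > 0 && set_qr(buf, (size_t)n) == 0)
            sendto(fd, buf, (size_t)n, 0,
                   (struct sockaddr *)&peer, plen);
    }
}
```

ISC's actual tool layers fork/thread counts, alternative I/O models, and CPU-core pinning on top of this basic loop.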
A bare minimum of tuning was done on the UDP read/write buffers on both client and server by having the sysctl variables net.core.rmem and net.core.wmem both set to a maximum and default value of 16 MB each.\nIn general the throughput is relatively constant on all three measures, although an explanation for the sudden fall-off when running 7 or 8 server processes in parallel is not immediately obvious. The otherwise flat throughput suggests that the client itself is limiting the performance. It should also be noted that the variability in the results is quite significant - the results are by no means deterministic.\nClient-Side Tuning In order to eliminate client-side effects from the testing, a variety of dnsperf settings were tried, eventually settling empirically on apparently optimal settings of -c 8 to make dnsperf act as multiple clients, and -q 500 to allow for 500 maximum outstanding requests. The defaults are -c 1 -q 100. It was also determined that using taskset to restrict dnsperf to only one of the two client CPUs gave better results. The reason for the latter remains unclear.\nFor two or three concurrent server processes both the average and maximum throughput are substantially increased (the latter by 45% to 348 kqps from a previous high of 239 kqps). The variability, however, is even greater for some tests than before. It should also be noted that the increase in the maximum throughput further reinforces the theory that the untuned dnsperf parameters used to generate the first graph were themselves limiting the maximum throughput of the system as a whole.\nExplaining the increased variability requires looking further into the network card architecture. The 82599ES NICs automatically configure themselves with eight separate TxRx queues, each with its own interrupt (or “IRQ”) number. 
Under Linux the association of each TxRx queue with CPU cores is automatically handled by the irqbalance service, and by default it assigns each TxRx queue in sequence to one CPU core.\nThe number of IRQs that have been handled for each combination of TxRx queue and CPU core can be examined in /proc/interrupts. As each 30-second run of dnsperf starts the counters start incrementing, but at most only 3 individual queues were observed getting involved in handling the network traffic, and therefore only 3 CPU cores.\nDuring each dnsperf run the queue assignments vary, as do the CPU cores allocated to the DNS echo server itself. As a result, just occasionally a highly optimal (or highly suboptimal) combination of queues and cores arises. The lower performance when 4+ server processes are in use requires further investigation, as does the apparent limit of only 3 queues being used. The 82599ES cards do permit finer-grained “per flow” balancing of incoming traffic across the TxRx queues and that may be looked at in a later study.\nServer-Side Tuning To further eliminate test variability the irqbalance service was disabled and every TxRx queue was locked to CPU core 7 (affinity mask 0x80) by executing the following command for each IRQ number associated with each TxRx queue:\n# echo 80 \u0026gt; /proc/irq/27/smp_affinity … # echo 80 \u0026gt; /proc/irq/34/smp_affinity The graph below shows the result of allowing the kernel to move the DNS echo server processes randomly between CPU cores, but with only core number 7 handling the network traffic:\nThe results now show significantly less variation than before, although the maximum throughput recorded has dropped back to levels below those seen in the second graph.\nThere is, however, still an observable peak when only two server processes are used. 
The conclusion must be that in general some combination of two cores is better than any other combination of cores in use.\nLet’s therefore examine what happens if we explicitly use just one server process and also explicitly lock that to each CPU core in turn:\nNow things are starting to look really interesting! We see equal results (more or less) for cores 0, 2, 4 and 6; lower but equal results for cores 1 and 3; a very high result (albeit not the highest yet seen) for core 5; and then coming in second is core 7 itself (the one also handling the NIC interrupts).\nWith core 7 being the last core on the second CPU, and cores 0, 2, 4 and 6 all being on the other CPU, it’s unsurprising that no difference is observed on those cores.\nWhat is perhaps more surprising is that cores 1, 3 and 5 don’t give equal results. The answer lies in the specific CPU architecture in use. Although the X5355 is described as a quad-core processor, it is actually two dual-core dies in one package. Each pair of cores shares a 4 MB Level-2 cache and there’s no shared cache between the two pairs of cores, as shown in this diagram produced by the lstopo program:\nWith all network traffic being handled by core 7 (PU P#7 in the diagram above) the most optimal (single core) performance is found by forcing the “closest” CPU core to handle the DNS echo server. Letting core 7 do all the work is less efficient.\nThe lower overall performance of cores 1 and 3 is slightly puzzling. 
Our current theory is that there is more contention for bandwidth on the memory bus when the two separate dual-core dies both need to access the system-wide shared memory than when cores from different packages need to.\nGoing back to the earlier observation that some combination of two cores is optimal, further tests were run to find out exactly which particular combination that is, as shown below:\nThe best combination is when the DNS echo server is run both on the same core as the one handling the network traffic and on the other core from the same die that shares the same L2 cache. This particular combination reached over 390 kqps - some 45 - 65% higher than the results found for most other combinations.\nThe core 1 and 5 combination is believed to be so much better than combining cores 1 and 3 simply because core 5 already has access to the packet data store in its shared cache with core 7. For those wondering about the combination of cores 3 and 5, its results were the same as for cores 1 and 5, as expected based on the CPU architecture.\nFor completeness, tests were run on various combinations of three or four cores. The combinations of core 1 or 3 running alongside cores 5 and 7 was about the same speed as cores 5 and 7 by themselves. The combination of four processes running cores 1, 3, 5 and 7 was significantly slower, as were various other ad-hoc combinations of four or more processes split between the two CPUs.\nConclusion Performance testing is a black art. It is highly dependent on many variables, including CPU architecture, network card facilities, interrupt behaviour, and CPU core affinity.\nTo produce reliable results requires a significant level of understanding of the systems hosting the tests, the benchmarking software that is exercising those systems and the software under test itself. 
It is especially important to ensure that you are not simply measuring the limitations of the benchmarking system, as seen in the first graph.\nIf your DNS server software provides tuning parameters (especially relating to how many processes or threads it will use), it is vital that you benchmark against your own expected traffic patterns and various combinations of those parameters until you find the values that work best for you on your own systems.\nWe plan to incorporate the lessons learnt above in our internal BIND performance regression test suite, giving us greater confidence in our own test results.\n","permalink":"/blogs/benchmarking-dns/","tags":["benchmarking","DNS","dnsperf","performance"],"title":"Benchmarking DNS Reliably on Multi-core Systems"},{"categories":["DHCP","Kea"],"contents":" Angelo Failla, Production Engineer at Facebook, shares some thoughts on why the company chose Kea for their DHCP needs.\nWhy did Facebook need a new DHCP solution? We use DHCP for provisioning servers in our production datacenters. We use it both for bare-metal provisioning (to install the operating system) and to assign addresses to the out-of-band management interfaces. Our old system was based on ISC DHCP and static configuration files generated from our inventory system. We loaded them into our DHCP servers using a complex svn/rsync-based periodic pipeline, restarting the DHCP server(s) to pick up the changes.\nThis took longer than we wanted. At our scale there are a lot of parts being added or replaced all the time (both NIC cards and servers). The DHCP servers were spending more time restarting to pick up the changes than serving actual traffic. In addition, the reconfiguration pipeline was slow. 
Sometimes the changes would propagate very slowly (~3 hours), slowing down repair times in the datacenters.\nIn short, we wanted a faster way to bootstrap hardware in our datacenters after maintenance or expansion.\nFacebook has very high standards for availability for all our datacenters. For redundancy we had two physical DHCP servers (in an active/standby configuration), in each cluster of servers. The problem was if we lost both DHCP servers in a cluster, the cluster lost DHCP completely. We wanted a more flexible approach, where every DHCP server in the network would be able to serve requests coming from any machine.\nA few years ago we replaced all of our hardware load balancers with software load balancers based on Linux IPVS and Proxygen (which we open-sourced). We decided we were going to do something similar for DHCP. We created a virtual cluster of DHCP servers, with individual instances distributed around the network. We are using Anycast and BGP to address these DHCP servers with a single set of addresses. This allowed us to simplify our cluster/datacenter bootstrapping processes and have better recovery in case of local failure of both DHCP servers in a cluster.\nWhy did Facebook decide to use Kea? We liked the fact that Kea is modern software and is designed to be extensible. Kea has hook points where you can add your own logic to parse incoming DHCP packets and modify them as you like right before they leave the server network interface. We leveraged the hooks feature extensively to customize Kea to meet our requirements.\nAlthough one of Kea’s big advantages over ISC DHCP is that Kea is dynamically reconfigurable, we didn’t care about that. We wanted to centralize as much configuration data as possible, and run a stateless DHCP service. We planned to deploy in Tupperware (our Linux Container technology, roughly equivalent to Google\u0026rsquo;s Borg). 
We didn\u0026rsquo;t want to package huge configuration files with the application, nor did we want to maintain this data in multiple places on the network. What we have developed is simple and fast to deploy: we just install the Kea binary with a very basic configuration file and then it fetches all the rest of the information dynamically from our inventory system. We maintain the client configuration information, such as host allocation, subnets, etc. centrally in our inventory system. This simplifies DHCP server deployment and on-going configuration maintenance.\nHow hard was it to design and implement this stateless system? It wasn’t hard. The Doxygen documentation on the Kea website was very clear. I liked that the docs came with a section that discussed the hook API using a simple library example.\nWhen we started using Kea, there was no support for network boot options, so we wrote an external C++ hook module to do this. Having the possibility to inject your own logic into the server gave us a lot of new possibilities: suddenly we were able to integrate the server with the rest of our infrastructure. Things like submitting stats to our monitoring system and having nice dashboards, sending logs to our Scribe infrastructure, alarming, and working around bad DHCP clients were now possible without writing any Python glue code.\nKea also helped us write a workaround to bypass some TFTP firmware issues we encountered when we turned up our first ever IPv6-only cluster last summer.\nHow long did this project take? We started working with Kea in February 2014. 
It took around 1 month to get a proof-of-concept hook deployed in a single cluster: 1 week was spent playing with the vanilla version of Kea and reading docs; another 2 weeks were spent writing and benchmarking middleware C++ code to talk to our inventory system; and the final week of the month was spent turning that code into the actual first hook library.\nIt took roughly 2 months to productionalize our setup and deploy it in a few production clusters, integrating things with our provisioning infrastructure and fixing various bugs/issues.\nBy the end of month 4 (end of May 2014) we were serving all of our DHCP clients using Kea! Since then we have been improving our hook library, refactoring things, dealing with new requirements, etc.\nWhat was the result? We now use hooks to request server configuration information dynamically from our inventory system, instead of generating static configurations. We can bring up or reload a server cluster faster. With our old system, it could take hours for changes to propagate down to the DHCP servers. With this new stateless design, we have been able to eliminate DHCP server reloading and get the overall time to propagate changes in our inventory system down to 1 or 2 minutes.\nWe also have significantly simplified the routing. Now DHCP servers anywhere on the network can respond to any client request on a shared anycast address. This has simplified our setup; we use DHCP agents running on our top-of-rack switches, so using a single global anycast IP in their configuration saves us time when bootstrapping new clusters or data centers. We have also improved cross-cluster redundancy, eliminating the need for hardware load balancers.\nWe also have beautiful dashboards and metrics which help us in troubleshooting provisioning issues.\nWhat suggestions and observations do you have for others? For datacenter applications, I highly recommend using a stateless DHCP approach. 
Keep state and configuration data separate from the application, handing it over to external systems. It simplifies configuration management and server deployment. Re-use existing open source solutions, such as Kea, and try to fight the “not invented here” syndrome. Write something from scratch only if what is available doesn\u0026rsquo;t satisfy your requirements. And when you do, try to open source it. About Angelo Failla\nAngelo is a Production Engineer for Facebook in the Dublin, Ireland Cluster Infrastructure team working on Cluster/DataCenter automation. Angelo has over 10 years experience as a system engineer, and over 15 years experience as a programmer. He uses his coding skills (bash/python/C++ and others) to improve operations at Facebook with automation and auto-remediation. Angelo has spoken at PyCon Ireland 2014, SREcon15 Europe, and DevOps Italia 2015. Originally from Sicily, Angelo’s own Facebook page is dominated by pictures of food. Angelo can be reached via Twitter: @pallotron, LinkedIn, and of course, Facebook.\nAngelo Failla gave a talk on the DHCP Infrastructure at Facebook at the SREcon 15 conference in Dublin. Both slides and a video recording are posted.\n","permalink":"/blogs/how-facebook-is-using-kea-in-the-datacenter/","tags":["Facebook","Kea","datacenter"],"title":"How Facebook is Using Kea in the Datacenter"},{"categories":["BIND","DNS"],"contents":"We at ISC want to encourage networking people around the Internet to focus attention for a few minutes on an obscure topic, EDNS compliance.\nThe EDNS (Extension mechanisms for DNS) protocol allows us to add new features to DNS that were not envisioned when DNS was originally specified. RFC1035 fixed the size of the DNS message and its components back in 1987. Since then, there have been a number of very good reasons for extending the DNS protocol suite. 
Support for EDNS is technically optional, but as the applications relying on EDNS become more critical, widespread support for EDNS is increasingly important. That application designers and network operators cannot yet rely on EDNS working end to end is becoming a critical problem for the Internet.\nHow widespread is EDNS support today? EDNS is currently supported on better than 90% of all DNS servers ISC surveyed recently (research report). EDNS was designed to allow further extensions to be easily deployed. If only one end of a connection is EDNS-aware, the extensions don\u0026rsquo;t work, but the basic DNS protocol should be unaffected. EDNS specifies how to do EDNS version negotiation and how to handle unknown EDNS options and unknown EDNS flags (by ignoring unknown EDNS options and flags). This has enabled us to build support for EDNS across the Internet gradually without disrupting the DNS.\nOne of the most widely-deployed features that relies on this mechanism is support for the DNS Security (DNSSEC) extension. Systems that support DNSSEC validation can positively determine that the DNS data they are sending to the client was \u0026lsquo;signed\u0026rsquo; by the owner of the zone. The demand for DNSSEC validation has made EDNS support much more important and probably helped drive adoption. However, the reliance on DNSSEC, particularly among those segments of the market that have adopted it broadly, including .gov and the country-code top level domains, also makes it even more important that EDNS works reliably.\nThis all sounds like success! What\u0026rsquo;s the problem? The percentage of DNS servers on the Internet that support EDNS drops significantly, from 90% with some support to 60 - 85% when you look at full compliance. The percentage of systems with compliance problems varies among different types of domains. Top level domains, such as countries, and the DNS root, have the best compliance at approximately 85%. 
This is probably because they are required by ICANN to support DNSSEC. (The requirement is being phased in as cc-TLDs renew their agreements with ICANN, so it is not yet required of 100% of the TLDs.) The Alexa .gov servers, which also have a high DNSSEC deployment rate, have approximately 70% full EDNS compliance. The Alexa top and bottom 1000 servers scored the lowest, with only about 60% of servers passing our EDNS compliance tests.\nPartial EDNS support can cause big problems We have identified a number of different problems amongst the EDNS servers with partial support. For more details, please read the most recent compliance report. We saw a range of compliance problems, including:\nOPT only returned when DO=1 is present in the request\nBADVERS not returned to EDNS (1) queries\nNOTIMP returned when an EDNS option is present\nFORMERR returned when an EDNS option is present\nBADVERS returned when an EDNS option is present\nNOTIMP returned when an EDNS Z flag is present\nFORMERR returned when an EDNS Z flag is present\nBADVERS returned when an EDNS Z flag is present\nEDNS option echoed back\nOPT not returned in truncated response\nEDNS (1) queries being dropped\nEDNS queries with a Z bit being dropped\nEDNS Z bits in queries echoed back\nTCP response size limited to EDNS UDP response size\nTruncated UDP response not sent when the response will not fit\nFragmented responses being blocked\nDO=1 not returned by DNSSEC-aware servers\nSystems that nominally support EDNS sometimes do not recognize that EDNS is supposed to support multiple extensions to the DNS and fail when they see any new, unknown extensions. These \u0026lsquo;broken\u0026rsquo; DNS servers do not handle unknown EDNS versions, flags and options correctly, returning unexpected result codes and option values. 
This triggers the systems querying them to retry by sending a plain (non-EDNS) message, effectively blocking the use of any EDNS application.\nAs we add more applications that rely on EDNS, partial compliance can end up resulting in failures with increasingly significant impact. Partial EDNS compliance is particularly a problem for systems that support the DNSSEC extension. If a server that has a signed zone is misclassified as not supporting EDNS, the resolver will not get the DNSSEC records required to validate the answers from the secure zone. This will have the result that a client of a validating resolver may no longer be able to access some network resources. We don\u0026rsquo;t want to \u0026lsquo;trigger\u0026rsquo; a situation that causes DNSSEC-signed properties to become unreachable.\nDNS Cookies present a new opportunity in the war against DDOS Here we come to the current problem. We need to deploy another important EDNS application.\nRecently, we have rediscovered another important application for EDNS: DNS cookies. DNS cookies, like web site cookies, enable two machines to recognize each other after an initial communication. Machines perpetrating abuse and attacks on the Internet will not exchange cookies and will not want to be identified and tracked. Using cookies to quickly identify non-abusive communications will enable DNS systems to identify suspected abuse traffic and defend against attacks much more efficiently. DNS cookies are not a new idea; the original proposal was made in 2006, but the IETF has only recently revived the draft and the current version is now approaching standardization. Because denial of service attacks are becoming an increasingly significant problem in the DNS, we want to enable this technique as soon as practically possible. ISC began shipping an experimental implementation of DNS cookies with BIND 9.10.0 in March of 2014.\nThe EDNS Cookies application is less forgiving of EDNS errors. 
Most current use of additional EDNS extensions is between consenting parties (NSID, EXPIRE). DNS COOKIES is an exception to the current usage patterns. A DNS COOKIES option is expected to be added to every QUERY, providing additional DNS spoofing resistance to both the server and the client when the server supports DNS COOKIES.\nWhat does this mean? We cannot deploy DNS COOKIES today without anticipating a negative impact on DNS query success, because of the substantial proportion of DNS servers lacking full EDNS compliance. Based on ISC\u0026rsquo;s research, we anticipate the failures will go beyond losing the value of the DNS cookies application, to include breakage of existing, working DNSSEC support. Those domains that have taken care to protect their DNS with DNSSEC will be most adversely impacted if their EDNS support is not fully compliant with the specifications for handling unknown options.\nWhat can you do? The first thing you can do is test the support for EDNS in your own DNS system. ISC is hosting an easy self-test tool on the web that queries your site and gives you a report in seconds. If you see a problem, it could be due to either your network infrastructure or your DNS server itself.\nSome firewall and anti-DoS vendors have default rules that treat requests using any of these extension mechanisms as a reason to drop the query, defeating the intent of the specifications, which define exactly how such requests should be handled. Timeouts are usually due to a misconfigured firewall, though there is a chance that the DNS server itself could be timing out. You may need to update or reconfigure your network infrastructure to allow EDNS to pass unobstructed.\nAll other errors (wrong rcodes, lack of OPT record, content incorrectly echoed) are problems with the DNS server. If your DNS server is deficient, contact your vendor for an upgrade. 
The EDNS protocol was standardized 16 years ago: full support should be available if you ask for it.\nReferences\nEDNS Compliance - presentation by Mark Andrews at IETF 92 in Dallas, March 24, 2015\nEDNS self-test tool - enter your zone name and find out how good your EDNS compliance is\nRFC 6891\nDNS Cookies current draft - pre-standard\n","permalink":"/blogs/partial-edns-compliance-hampers-deployment-of-new-dns-features/","tags":["DNS","EDNS","interoperability","standards"],"title":"Partial EDNS Compliance Hampers Deployment of New DNS Features"},{"categories":["BIND","DNS"],"contents":"Update, September 30, 2017: The DLV contents have been replaced with a signed empty zone. There is no longer any need or benefit to querying the DLV.\nThe ISC DLV Registry has been available since 2006, and ISC has been happy to provide the service. However, due to the great progress that native DNSSEC has made, we have decided that it is time to wind down the project. It has served its purpose well.\nIf you have a zone that can be properly validated to the root, please do that. It helps everyone. If you\u0026rsquo;re not sure if your zone can be validated, we recommend you try the DNSViz tool. If you have a zone already in DLV that could validate properly to the Root, we\u0026rsquo;d like you to remove it from DLV. For now, these are just requests.\nIn 2016, we plan to stop accepting zones that could validate to the root, and will remove from the DLV any zones that already do. We will remove all records from DLV in 2017, but leave the (empty) service running.\nWe thank everyone who has participated in this project, and encourage everyone to sign their zones and publish their zones with native DNSSEC!\nWe discussed our plans at ICANN in February in Singapore, at DNS-OARC and RIPE in May in Amsterdam, and at NANOG in San Francisco in June. The feedback from those groups was very positive in favor of moving ahead with this timeline. 
There are a few more details in the presentations linked below.\nICANN 52 – Sunset for the DLV – Vicky Risk, Director of Marketing, February 9, 2015\nRIPE 70 – Sunset for the DLV – Jim Martin, Director of Operations, May 13, 2015\nWe have directly notified those operating system distributors of BIND that we are aware of, as well as the Unbound team at NLNET Labs. If you know of other organizations we should notify, let us know at dlv@isc.org. ","permalink":"/blogs/dlv/","tags":["DNS","BIND","DLV","network"],"title":"Decommissioning the DLV"},{"categories":["BIND","DNS"],"contents":"Recursive DNS server administrators have for many years been advised to ensure that both the servers that they are running and the network environments wherein those servers reside are RFC-compliant. This is to ensure the best possible outcome when handling client queries.\nWhile some older DNS implementations and/or mis-configured servers still fail to adhere to current standards, there are two situations in particular where load-balancers, firewalls, routers and other network infrastructure devices sometimes fail to allow uninterrupted transit of DNS queries and query responses:\nDNS queries using TCP (best current practice for many years, and now clarified and asserted in RFC 5966: https://tools.ietf.org/html/rfc5966)\nDNS queries using UDP packet sizes greater than 512 bytes per EDNS0 (Extension Mechanisms for DNS 0: https://tools.ietf.org/html/rfc6891)\nBIND recursive servers operating in broken environments as detailed above will still be able to provide adequate answers to clients most of the time, although there will be some failures or resolution delays, particularly if neither TCP nor EDNS is possible. 
Clients of such servers will not be able to perform DNSSEC-validation locally.\nDNS servers should be able to make and respond to queries using both TCP and large UDP packet sizes (with EDNS)\nClients using DNS servers that themselves don\u0026rsquo;t fully support TCP or EDNS or which are running within environments that don\u0026rsquo;t permit transit of DNS traffic using TCP and EDNS will potentially have a degraded experience. There are a few BIND configuration workarounds that can be deployed to temporarily mitigate some of the impact of broken deployments like these, but the long term recommendation is that equipment that cannot be reconfigured to support modern DNS protocols should be replaced or upgraded.\nWhat does BIND do to determine how to communicate with other servers? When operating recursively, BIND 9 has a \u0026ldquo;fallback\u0026rdquo; process that it uses to test the capabilities of remote authoritative servers that it queries. When it \u0026ldquo;learns\u0026rdquo; what the path to those servers supports, it caches this information.\nBy default, all currently-supported versions of BIND request DNSSEC signatures when querying authoritative servers using TCP or UDP with EDNS. 
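Requesting DNSSEC signatures means setting the DO bit in the EDNS OPT pseudo-record that the resolver appends to its query. As a rough illustration of the RFC 6891 wire format (this is not BIND code, and `build_opt_record` is a made-up helper name), the fixed fields of an OPT record can be packed with Python's struct module; note how the advertised UDP payload size travels in the CLASS field and the DO bit is the high bit of the flags:

```python
import struct

DNS_TYPE_OPT = 41  # OPT pseudo-RR type code, per RFC 6891

def build_opt_record(payload_size=4096, dnssec_ok=True, version=0):
    """Build the wire form of an EDNS OPT pseudo-record (no EDNS options).

    Layout per RFC 6891 section 6.1.2:
      NAME   = root domain (a single zero byte)
      TYPE   = OPT (41)
      CLASS  = requestor's advertised UDP payload size
      TTL    = extended RCODE (8 bits) | version (8 bits) | DO bit + Z (16 bits)
      RDLEN  = 0 (this sketch carries no EDNS options)
    """
    flags = 0x8000 if dnssec_ok else 0x0000       # DO is the top bit of the flags word
    ttl = (0 << 24) | (version << 16) | flags     # extended RCODE 0, requested version
    return b"\x00" + struct.pack("!HHIH", DNS_TYPE_OPT, payload_size, ttl, 0)

rec = build_opt_record()
assert len(rec) == 11                        # root name + 10 fixed bytes
assert rec[3:5] == struct.pack("!H", 4096)   # advertised bufsize in the CLASS field
assert rec[7] & 0x80                         # DO bit set
```

This also makes the fallback behavior concrete: dropping from "EDNS, size 4096" to "EDNS, size 512" only changes the CLASS field, while dropping EDNS entirely removes this record, and with it any way to set DO.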
DNSSEC signatures cannot be requested using UDP without EDNS0.\nFor currently (and recently) supported versions of BIND up to and including BIND 9.9, the fallback algorithm for a \u0026ldquo;new\u0026rdquo; authoritative server operates as follows:\nQuery with EDNS, advertising size 4096, DO (DNSSEC OK) bit set\nIf no response, retry with EDNS, size 512, DO bit set\nIf no response, retry without EDNS (no DNSSEC, and buffer size maximum 512)\nIf no response, retry the query over TCP\nBIND 9.10 uses a slightly different process of tries and retries for EDNS-capable servers to determine the maximum size of UDP responses that it should request from them, but similar logic applies to whether or not queries will be tried without EDNS, and when TCP should be used.\nTCP may be used earlier, not just as the final attempt BIND will resend a query over TCP immediately if it receives a response to that same query sent over UDP that has the TC (Truncated) bit set in the DNS response header. This mechanism is designed so that a server can flag to a client that it was unable to fit all of the necessary records into the maximum packet size that it was able to use for that response; therefore, the client is receiving an incomplete answer. Although this is optional (per DNS standards), most clients receiving a response that is flagged as truncated will re-send the query using TCP in order to obtain the full response.\nWhat is different in BIND 9.9.6, 9.9.6-S1, 9.8.8, and 9.10.1 and how might it impact my Recursive DNS servers? 
In these new versions of BIND, we reworked the fallback process so that if a server had previously responded using EDNS, then we would flag it as EDNS-capable, and would not subsequently retry queries without EDNS.\nUnfortunately, there is one rare scenario where this can cause new problems (although in that scenario, the server would already be experiencing other difficulties).\nThe requirements for there to be a new problem after upgrading are that all of the following are true:\nAn answer would be too large for a 512-byte UDP response packet when DNSSEC is requested, but less than 512 bytes without it. It is expected therefore that this new issue will only impact queries for records within DNSSEC-signed zones.\nNo response is received back when sending queries with EDNS and a bufsize large enough to accommodate the authoritative server response.\nThe same query using EDNS with bufsize 512 receives a response with the TC (Truncated) bit set in the header.\nAll TCP queries to the server fail.\nThe recursive server doesn\u0026rsquo;t perform DNSSEC validation. (If the server had been performing DNSSEC validation before upgrading, it would already have been failing to resolve this name if the zone was properly signed with the full delegation path in place; thus this would not be a \u0026ldquo;new\u0026rdquo; problem.)\nIn the situation above, because there was an answer that used EDNS with a buffer size of 512, the server is presumed to be EDNS-capable; therefore there will be no retry of the query without EDNS. Because TCP fails, and plain DNS is never attempted, it is impossible to obtain a non-truncated response from this server. 
The outcome will be that a SERVFAIL response is sent back to the requesting client.\nOlder versions of BIND would have retried the query without EDNS, and would have received a smaller, unsigned, non-truncated response from the server, which they would then have sent back to the requesting client instead of SERVFAIL.\nThe scenario above is a very unusual corner case that should only affect DNS server administrators whose network infrastructure is already badly broken.\nOK, I think I might have this problem, what should I do? Check your network infrastructure. Below are some of the problems that you might find:\nAll DNS queries and responses using TCP are being blocked/dropped.\nIf your network infrastructure is doing this, fix it now - it will be causing you other problems, not just this one. There are many authoritative servers today that respond to clients with the TC bit set for reasons other than response truncation, including those deploying Response Rate Limiting as part of their mitigation strategy for Denial of Service (DoS) attacks. All modern DNS recursive servers need to be able to use TCP when querying authoritative servers.\nAll DNS queries/responses using EDNS are being blocked/dropped.\nIf your network infrastructure is doing this, plan to fix it as soon as possible. Responses that are too large to fit in 512-byte UDP packets are increasingly common nowadays, even from zones that are not DNSSEC-signed. If your infrastructure doesn\u0026rsquo;t support DNS using EDNS then your servers will be falling back to TCP more often than necessary. 
As a short-term workaround (since you can\u0026rsquo;t use it anyway), you can disable the use of EDNS for all servers in your named.conf file:\nserver 0.0.0.0/0 { edns no; }; server ::/0 { edns no; };\nDNS queries/responses using EDNS and/or TCP to a particular remote server are being blocked/dropped.\nIf the problem is with a remote authoritative server, a server statement in named.conf can be used as a temporary workaround: server \u0026lt;ip-address\u0026gt; { edns no; };\nDNS queries/responses using EDNS are allowed, but UDP fragmentation and reassembly is broken.\nThis is often a difficult problem to detect and may not be possible to resolve immediately. What has usually happened is that somewhere along the path back to your server, a large DNS response has to be fragmented, but it then arrives in fragments at a destination device that doesn\u0026rsquo;t support reassembly of UDP fragments, so the response can\u0026rsquo;t be reassembled. If you already support DNS queries over TCP, you may not even have noticed that there is an issue. In the short term, you can speed up the fallback to TCP by advertising the maximum size you can receive using EDNS, either with the global option edns-udp-size or with server options per the example below: server 0.0.0.0/0 { edns-udp-size 1432; }; server ::/0 { edns-udp-size 1232; };\nNote that the settings suggested above are using values expected to work in most situations when UDP fragmentation is not supported. You may find that there are other factors at play in your network infrastructure that mean that you can use larger settings, or that instead you need to set smaller limits.\nIf you are unable to correct your network infrastructure or to use the configuration workarounds suggested above, patches are attached to this article for BIND 9.9.6-P1 and 9.10.1-P1. The patches add a final query without EDNS, as long as the resolver is not configured for DNSSEC validation.\nWhy was BIND\u0026rsquo;s behavior modified? 
One problem with the BIND fallback algorithm in prior versions of BIND 9 is that when BIND learns that a server doesn\u0026rsquo;t support EDNS or TCP, it saves this information for the next time it needs to query the same server.\nIn the case of intermittent network connectivity or packet losses, it can happen that a server that previously supported queries using EDNS temporarily does not appear to respond properly unless queried without EDNS. When that happens, the earlier versions of BIND would flag the server as unable to support EDNS and would no longer use EDNS. Any queries thereafter sent to that server wouldn\u0026rsquo;t have the DO (DNSSEC OK) flag, so for DNSSEC-signed zones, the responses from that server would not include DNSSEC signatures. A DNSSEC-validating server receiving unsigned responses for a signed zone would regard them as spoofed, and answer with a SERVFAIL to the client. The domain would then effectively become unreachable for DNSSEC-validating clients.\nIn response to this problem, BIND changed the way it handles caching the capabilities of authoritative servers, so that if a server had previously demonstrated that it was EDNS-capable, BIND would store that information, and never query that server without EDNS.\nA future change to BIND will reintroduce the query without EDNS as a last-ditch effort to obtain an answer to a query, so long as the resolver is not configured for DNSSEC validation. Subsequent queries to the same authoritative server would always be tried first with EDNS.\nThis future change is not a bug fix The proposed future change to BIND is being made purely to accommodate broken server environments in the short term - administrators are strongly urged to fix their configurations and infrastructure. The process of retries adds delays into resolution of queries for clients. 
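The classic fallback ladder (EDNS with size 4096, then EDNS with size 512, then plain DNS, then TCP) can be sketched as a simple loop. This is an illustrative model only, not BIND internals; `send_query` is a hypothetical caller-supplied function standing in for the actual network transaction:

```python
# Illustrative model of the classic BIND 9 resolver fallback ladder.
# Each "rung" describes one query attempt; send_query(attempt) is a
# hypothetical function that returns a response, or None on timeout.

FALLBACK_LADDER = [
    {"transport": "udp", "edns": True,  "bufsize": 4096, "do_bit": True},
    {"transport": "udp", "edns": True,  "bufsize": 512,  "do_bit": True},
    {"transport": "udp", "edns": False, "bufsize": 512,  "do_bit": False},
    {"transport": "tcp", "edns": False, "bufsize": None, "do_bit": False},
]

def resolve_with_fallback(send_query):
    """Try each rung in turn until one produces a response.

    Returns (response, attempt_used), or (None, None) if every rung
    timed out.
    """
    for attempt in FALLBACK_LADDER:
        response = send_query(attempt)
        if response is not None:
            return response, attempt
    return None, None

# Example: a path that silently drops every EDNS query.  The ladder
# falls through to the plain-DNS rung, which succeeds.
drops_edns = lambda a: "answer" if not a["edns"] else None
resp, used = resolve_with_fallback(drops_edns)
assert resp == "answer" and used["transport"] == "udp" and not used["edns"]
```

In this model, the change described above for BIND 9.9.6 and its siblings amounts to skipping the plain-DNS rung for any server already flagged as EDNS-capable, which is exactly why the rare corner case in this article ends in SERVFAIL when TCP also fails.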
Where there are many servers authoritative for a domain, named will progress through all of them and in some situations may fail to complete its fallback to no-EDNS before the client query times out. Recursive servers that do not support TCP are broken and will have a negative impact on the experience of their clients. Recursive servers that do not support EDNS are limiting their own performance. Recursive servers that support limited EDNS sizes but which have not been appropriately configured are also limiting their own performance.\nThis article is a reprint of an article previously published in ISC\u0026rsquo;s Knowledgebase.\n","permalink":"/blogs/refinements-to-edns-fallback-behavior-can-cause-different-outcomes-in-recursive-servers/","tags":["DNS","BIND","EDNS"],"title":"Refinements to EDNS fallback behavior can cause different outcomes in Recursive Servers"},{"categories":["ISC","Company News"],"contents":"ISC\u0026rsquo;s Public Benefit network services are: F-Root; SNS-PB, a subsidized anycasted DNS infrastructure for non-profits; Hosted@, subsidized hosting for non-profit projects at our Redwood City location; a municipal network connecting a number of local cities and non-profits to the Internet; and dlv.isc.org, a DNSSEC Look-Aside Validation service.\nNetwork Infrastructure We maintain approximately 2768 peering sessions across our infrastructure, or more if you count all the nodes that peer through route servers. Maintaining these is a significant work item. In 2014 we worked nearly 200 tickets, adding and deleting peers.\nA number of service providers donate transit or hosting for F-Root nodes. In addition, we received two significant donations to our network infrastructure in 2014:\nTelia Sonera provided us with free 1G transit links this year both for F-Root and for general ISC network operations (in Atlanta, Chicago, Palo Alto, Frankfurt, Amsterdam). 
NTT has generously upgraded their donation of Internet transit in our main Palo Alto datacenter from 10G to 40G, and also offered us transit at any of our other sites that need it. F-Root Completed Projects\nWe audited all of our existing F-Root nodes that don\u0026rsquo;t have IPv6 connectivity. We contacted both the IX and transit partners and IPv6-enabled the nodes where possible. The nodes that got v6 in 2014 are Phnom Penh, Chennai, Kuala Lumpur, and Torino. At this time, 40 of our 57 nodes are IPv6-enabled. We developed a design for a new F-Single node that dramatically reduces the cost for small sites. We solicited new node locations at AfriNIC in Mauritius in November, based on this new single-node design. We added new nodes in 2014 in Kuala Lumpur and Chennai, India. None of our public nodes were decommissioned in 2014. The total number of F-Root nodes went from 55 to 57 in 2014. We signed agreements to add nodes in Beijing, China and Luanda, Angola. We continue to work on new nodes being added in Uruguay, Angola, and Beijing. It can take several months to half a year to add a node, because for any given node there are generally two transit providers, a hosting provider, and one or more peers that need to be identified, established, configured, and tested.\nF-Root maintenance\nHardware refreshes or upgrades in: Amsterdam, Chicago, Frankfurt, and New York. We would like to refresh each F-Root node approximately every 5 years. The frequency of refresh is of course really up to the node sponsor (because they purchase and own the equipment). Software rebuilt or upgraded in (at least): Brisbane, Paris, Tel Aviv, Warsaw. Transit providers changed, or more links or backup providers added, in (at least): Amsterdam, Chennai, Monterrey, Chicago, Frankfurt, New York, Atlanta, Palo Alto\nF-Root Service Availability\nF-Root has not had an actual service outage in over 10 years. 
Because F-Root is anycasted, an individual node outage does not impair overall service availability, although users who were getting service from that node may see longer response times. Individual node outages can occur when someone makes a (mis)configuration change, or there is a hardware failure. Individual nodes are regularly degraded or taken out of service for a few months at a time due to localized problems (e.g. transit availability; in one case a POP was flooded), moves, or changes in sponsorship. There were 5 F-Root nodes that were degraded in 2014 for a significant period.\nWe monitored and addressed an attack on the root system in January. Global query loads doubled, but the attack had no real impact on the overall system.\nDITL Data\nWe participate in the DNS-OARC \u0026ldquo;A Day in the Life\u0026rdquo; (DITL) data collection project every year. This year we contributed 58,405,816 Mbytes of data, consisting of 5,586 million queries to F-Root. This was approximately 14% of queries to the root reported in the DITL data this year.\nSNS ISC operated both a commercial secondary name service and a subsidized public-benefit service in 2014. There are several dozen small ccTLDs published through our subsidized SNS service.\nAs every other network operations team does, we spend considerable resources on DDOS prevention and mitigation. We had to update systems across the board in response to the Heartbleed, NTP, and POODLE issues in 2014.\nEarly in 2014, our secondary name service was hit by TCP exhaustion attacks. We monitored, increasing the availability of TCP sockets as needed. Later, in July, we were hit with a 200+ gigabit DDOS against the SNS Public Benefit servers in the San Francisco Bay Area, Chicago, and Amsterdam, as well as AS1280. The target was a ccTLD that we were hosting. 
We were able to work both with our friends in the community to locate the attack vector (a mixture of NTP and a flood of bad DNS packets from a well-known botnet), and with our transit providers to filter out the attack traffic.\nThis attack made isc.org completely unreachable for an hour or so, and degraded our connectivity considerably for most of a day.\nAfter the massive DDOS, we realized that we do not have the bandwidth or the scale to effectively protect against a modern flooding attack. Making the investment in equipment required to reach that scale is probably not consistent with our non-profit mission. So, we made the decision to recommend that our commercial SNS customers move to another (larger) provider. ISC is no longer accepting commercial SNS customers, although we will continue our subsidized public-benefit SNS service. We have secured additional bandwidth for our subsidized public-benefit SNS to help withstand future DDOS attacks.\nHosted@ We have historically provided free or subsidized hosting for non-profit projects. We did quite a bit of work here in 2014, resolving 234 issues for our hosted partners.\nThe Kernel.org team refreshed their equipment and donated their old servers to ISC. We are looking into putting them back under maintenance and using them as a VM farm for Hosted@ guests. The goal is to reduce the number of ancient power-hungry guest servers in our Redwood City data center. The electric bill is enormous!\nWe added NANOG backup servers to Hosted@, and will publish nanog.org via SNS-PB.\nOur Hosted@ users include: DNS-OARC, Public Library of Science, the FreeBSD Foundation, Public.Resource.org, the Network Time Foundation, the Measurement Lab (M-Lab), NetBSD, distributed.net, OpenDNS, The Linux Kernel Archives, OpenBSD, The Center for Applied Internet Data Analysis (CAIDA), the Bufferbloat Project, Creative Commons, the Free Software and Open Source Foundation for Africa, HTTP Archive, NANOG, SANOG, and more. 
In 2014 we removed and returned equipment to OpenLDAP.org, unixheads.com, and Bay Area Children’s Theatre, among others.\nNTP Amplification issue This was a big event in early 2014, and impacted a number of our hosted projects. We helped deflect an NTP amplification attack against one of our hosted customers. We also found that a number of our hosted customers were the source of NTP amplification attacks. We collected incoming abuse reports and contacted administrators for these hosted systems to make sure they locked down their NTP servers.\nMunicipal network In 2014 we upgraded our public-benefit municipal network service to 10G. The City of Menlo Park is now getting their Internet entirely through ISC, and we continue to work with the Palo Alto Unified School District to get them a 20x increase in bandwidth. ISC provided free or subsidized connectivity and transit for 11 local towns, schools, and non-profit organizations in 2014.\nDLV ISC operates the DNSSEC Lookaside Validation (DLV) registry. This was developed as a transition mechanism to help people who wanted their DNSSEC-signed zones to be validated even though their parent had not yet adopted DNSSEC. There is an interface at dlv.isc.org that allows users to register their domain to be validated through the DLV.\nIn 2014 we modernized the DLV infrastructure, moving it to virtual machines at our Palo Alto datacenter, because it was running on very old hardware and an old OS. We also did an assessment and created a proposed plan for gradually decommissioning the DLV, which will be socialized throughout the DNS community in 2015. We currently have about 2,800 working zones that can be validated through the DLV.\nRIPE Atlas Anchor In 2014 ISC became one of the Anchor Probe sites for the RIPE NCC Atlas project.\nAS112 Server ISC operates one of the AS112 servers. 
The network of AS112 servers absorbs leaked queries for what are supposed to be private RFC1918 addresses, reducing the load on the rest of the DNS infrastructure.\nNetwork Engineering for IETF Meetings Jim Martin, our Director of Operations, has been the volunteer NOC Team Lead for the IETF for the past 15 years. Jim spends a substantial amount of his time in planning, establishing, and running the conference and hotel IP network for every IETF meeting. He and his team of volunteers create a rapid-deployment event network consisting of an extensive wireless deployment (\u0026gt;100 access points), multiple IPv4 and IPv6 BGP peerings over anything from a DS3 to a 100G experimental link, for about 1500 of the most demanding users on the planet \u0026hellip; the people who design the way the Internet works. In 2014 Jim established and worked in the NOC for the 89th IETF in London, the 90th IETF in Toronto, and the 91st IETF in Honolulu, and scouted possible future locations for the IETF.\nISC.org Website One notable event in 2014 (we can’t call it an accomplishment) was the malware infection on the ISC website. We took our website down between Christmas and New Year’s Day while we scrubbed all the files on our website, updated the software, and added malware detection. We could have restored the website more quickly, but we figured that everyone looking at it was on vacation, so we took our time. We have not had any reports that we actually infected anyone, but we deeply regret the whole situation.\n","permalink":"/blogs/notable-events-in-isc-network-operations-in-2014/","tags":["F-Root","network","operations"],"title":"ISC Network Operations Report for 2014"},{"categories":["Annual Reports","Company News"],"contents":"Most of our work at ISC falls into one of two major project categories: open source development and network services. 
We will review our 2014 accomplishments in network services in a separate post.\nIn 2014 we did a solid job of maintaining our primary open source projects, BIND 9 and ISC DHCP. We fixed more bugs in 2014 than were discovered or reported in 2014, even as we dedicated a lot of resources to addressing the resolver DDoS problem and maintaining our support for standards development.\nBIND 9 BIND is the industry reference implementation of the DNS protocols and a significant open source program at ISC. In 2014 we made the difficult decision to cancel work on BIND 10, and re-focus on BIND 9. We have continued maintenance of BIND 9 and added a new feature branch. In 2015 we hope to add more resources to the BIND 9 program, improve our test coverage, and bring out another new feature branch, 18 months after the previous one.\nMajor accomplishments in 2014\nReleased 9.8.7 \u0026amp; 9.8.8, 9.9.5 \u0026amp; 9.9.6, 9.10.0 \u0026amp; 9.10.1. Declared EOL for BIND 9.6 in January 2014 and for BIND 9.8 in December 2014. We added 24 new articles about BIND in our Knowledgebase (kb.isc.org) We posted an open BIND git repository We released the beta version of a new BIND DNSSEC Guide Maintenance\nRESOLVED 575 issues in 2014 (not counting those opened before 1/1/2011)*\nOPENED 557 new issues in our bug tracker\nWe made a special effort to review and accept more contributed patches. In 2014, we accepted, integrated, and released at least 35 contributed patches. Created special Windows releases for: 9.6, 9.8.7 \u0026amp; 9.9.5, fixing a bug that prevented dig and nslookup from exiting properly when run on MS Windows systems. Issued 4 sets of -P security releases. We made an average of 2.6 commits per DAY to the BIND master branch We use the Coverity open source scanning program extensively. BIND is showing an incredibly low defect density of 0.01, with 329,951 lines of code scanned. 
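Coverity reports defect density as outstanding defects per thousand lines of code (KLOC). As a quick back-of-the-envelope sketch (not an official Coverity figure), here is what 0.01 works out to at BIND's size:

```python
# Coverity defect density = outstanding defects per 1,000 lines of code (KLOC).
lines_scanned = 329_951    # lines of BIND code scanned
defect_density = 0.01      # defects per KLOC, as reported for BIND

kloc = lines_scanned / 1000
outstanding = defect_density * kloc
print(f"~{outstanding:.1f} outstanding defects across the scanned code")
```

In other words, a density of 0.01 corresponds to roughly three flagged defects in the entire codebase.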
We added the Coverity badges that track the current status to our BIND and DHCP pages on the ISC website, so the information is readily available. Security\nThe Heartbleed vulnerability discovered in OpenSSL had a big impact on the IT community, but did not impact BIND specifically.\nWe issued 5 CVEs, 3 of which were specific to 9.10:\nCVE-2014-0591: A Crafted Query Against an NSEC3-signed Zone Can Crash BIND CVE-2014-3214: A Defect in Prefetch Can Cause Recursive Servers to Crash (Affects recursive servers running BIND 9.10 only.) CVE-2014-3859: BIND named can crash due to a defect in EDNS printing processing (Affects BIND 9.10 only) CVE-2014-8500: A Defect in Delegation Handling Can Be Exploited to Crash BIND (Affects recursive servers only.) CVE-2014-8680: Defects in GeoIP features can cause BIND to crash (Affects BIND 9.10 only) Codenomicon ran some packet fuzzing test runs on BIND 9.10 for us in the summer of 2014. They ran millions of test cases, and found a vulnerability in \u0026ldquo;dig\u0026rdquo; which, on closer inspection, revealed a \u0026ldquo;packet of death\u0026rdquo; vulnerability in BIND 9.10.0. ISC issued an operational advisory explaining how to build BIND with gcc 4.9 to avoid the problem.\nNew feature development\nLaunched BIND 9.10, with a new faster \u0026ldquo;map\u0026rdquo; format for zone files, pre-fetch, cookies, shared views, and new statistics formatting. Developed per-zone fetches, per-server fetches, and a hold-down timer for DDoS mitigation, which we trialed in our experimental service-provider branch Implemented negative trust anchor to ease deployment of DNSSEC validation, also available now in our premium subscription branch, coming in 2015 to the open source 9.11 release Implemented client-subnet-ID for authoritative service and began working on a design and project plan for full client-subnet-ID that will require external funding in 2015. 
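The DDoS-mitigation fetch limits and the prefetch feature described above surface as named.conf options; a sketch (option names as they later appeared in open source BIND releases, with illustrative values rather than recommendations):

```
options {
    // Cap concurrent recursive fetches for names within one zone
    fetches-per-zone 200;
    // Cap concurrent fetches directed at any single authoritative server
    fetches-per-server 400;
    // Refresh a popular record when 2 seconds of TTL remain,
    // for records whose original TTL is at least 9 seconds
    prefetch 2 9;
};
```
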
Contributions to DNS standards\nISC engineers invest considerable time and effort working on proposals for Internet standards. Below is a list of documents in process that ISC staff are writing or co-authoring.\nRFC 7314 Extension Mechanisms for DNS (EDNS) EXPIRE Option: M. Andrews Domain Name System (DNS) Cookies: D. Eastlake, M. Andrews RFC 6598-6303 Add 100.64.0.0/10 prefixes to IPv4 Locally-Served DNS Zones Registry: M. Andrews DNSSEC Key Rollover Timing Considerations: S. Morris, J. Ihren, J. Dickinson, W. Mekking A DNS Record for Confidential Comments (expired draft): E. Hunt, D. Mahoney A Common Operational Problem in DNS Servers - Failure To Respond: M. Andrews RFC 7343 An IPv6 Prefix for Overlay Routable Cryptographic Hash Identifiers: J. Laganier, F. Dupont ISC DHCP ISC DHCP is distributed with most open-source operating systems and is incorporated into many commercial DDI/IPAM applications, as well as embedded devices. The software is mature and full-featured, but challenging to maintain. We are proud to have completed another year of aggressive maintenance, and to have released another feature branch. We added a new developer to the project in 2014. 
In 2015 we are hoping to taper off our work on ISC DHCP and focus more on Kea, the next-generation DHCP server from ISC.\nMajor accomplishments in 2014\nReleased 4.1-ESVR9 \u0026amp; -R10, 4.2.6 \u0026amp; 4.2.7, 4.3.0 \u0026amp; 4.3.1 Extended the date for End of Life for DHCP 4.1 another year (planned for December 2014, extended until December 2015) We created an open ISC DHCP git repository We accepted at least 11 contributed patches Added 12 new KB articles about ISC DHCP in our Knowledgebase (kb.isc.org) Maintenance\nRESOLVED 167 issues in 2014 (not counting those opened before 1/1/2009)**\nOPENED 133 new issues in our bug tracker\nWe use the Coverity free scanning program for open source programs, and starting in April, 2014, we made it a priority to address our outstanding Coverity errors in the DHCP project. Since then we have reversed the trend, and right now we have a Coverity defect density of 0.09, which is excellent.\nWe determined that our DHCP client script could be a vector for the Shellshock BASH vulnerability discovered in 2014. We communicated with the operating system packagers (who create these client scripts) about this possibility.\nNew Feature Development We launched ISC DHCP 4.3.0, which we called our IPv6 \u0026ldquo;uplift\u0026rdquo; release. This release added more feature support for IPv6, including access to relay options, on-expiry/on-renew features, and class support. It also added OMAPI subclass control, and implemented the newer standardized DHCID resource record format.\nIn addition to the 4.3.0 feature release, we added 12 minor features requested by users in releases 4.3.1 and 4.3.2, with selective backporting to earlier releases.\nKea DHCP Kea is our under-development next-generation DHCP server, intended to eventually replace the ISC DHCP server. Kea is a server only, and does not currently include a client or relay. Kea is intended to be more easily extended than ISC DHCP, and is designed for dynamic reconfiguration. 
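To give a flavor of the design, Kea reads a JSON configuration file; a minimal DHCPv4 sketch (the interface name and addresses are hypothetical, and exact keys vary by Kea version):

```
{
  "Dhcp4": {
    "interfaces-config": { "interfaces": [ "eth0" ] },
    "lease-database": { "type": "memfile" },
    "subnet4": [
      {
        "subnet": "192.0.2.0/24",
        "pools": [ { "pool": "192.0.2.10 - 192.0.2.200" } ]
      }
    ]
  }
}
```
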
We are encouraged by the interest in contributing to and deploying Kea that we have seen from the community in 2014. We plan to continue new feature development in 2015, making Kea suitable for datacenter or public wifi deployments.\nMajor accomplishments in 2014\nReleased Kea 0.9, which separated Kea from the BIND 10 framework, making it a working standalone application. We also removed the dependency on Python and Botan. Began working on Kea 0.91, which is being developed in the open at https://gitlab.isc.org/isc-projects/kea We established a set of Kea interest mailing lists, which you can sign up for on the ISC mailing lists page We continued our partnership with Gdansk University, holding a hackathon there, and proposing several masters and PhD thesis projects. We opened a site on GitHub to accept contributions. We have accepted patches from the following: RedHat, CapGemini, CERN, Facebook, and 2 universities, Gdansk University and Silesian University. ISC contributions to DHC standards development\nISC engineers invest considerable time and effort working on proposals for Internet standards. Among the more notable efforts in 2014 are the work on the DHCP proposal RFC3315bis, and the two DHCP privacy drafts. In addition to working on drafts, ISC Senior Software Engineer Tomasz Mrugalski co-chaired the IETF DHC working group in 2014.\nRFC 7227 Guidelines for Creating New DHCPv6 Options: D. Hankins, T. Mrugalski, M. Siodelski, S. Jiang, S. Krishnan RFC 7431 DHCPv4-over-DHCPv6 (DHCP 4o6) Transport: Q. Sun, Y. Cui, M. Siodelski, S. Krishnan, I. Farrer RFC 3315bis Dynamic Host Configuration Protocol for IPv6 (DHCPv6) bis: T. Mrugalski, M. Siodelski, B. Volz, A. Yourtchenko, M. Richardson, S. Jiang, T. Lemon Customizing DHCP Configuration on the Basis of Network Topology: T. Lemon, T. Mrugalski DHCPv6 Options for configuration of Softwire Address and Port Mapped Clients: T. Mrugalski, O. Troan, I. Farrer, S. Perreault, W. Dec, C. Bao, L. Yeh, X. 
Deng Privacy considerations for DHCP: S. Jiang, S. Krishnan, T. Mrugalski Privacy considerations for DHCPv6: S. Krishnan, T. Mrugalski, S. Jiang Major Changes to Projects We released version 1.2 of BIND 10, ended the BIND 10 development project at ISC, renamed the BIND 10 components as Bundy, and released control of the source to be managed by the Bundy project, which has put it up on GitHub. We jettisoned the DNS-Co branding which had come to symbolize aggressive commercialization. We wrapped up the Open Home Gateway Forum, funded by Comcast. We removed all restrictions on our Knowledgebase and on our duplicate git repositories for BIND and ISC DHCP, so these resources are all free and open to anyone. Previously we reserved some access for subscribers only. ISC 2014 conference presentations Apricot - DNS Response Rate Limiting presentation SELF (South East LinuxFest) - DNSSEC Signing Your Zones APNIC - 10 Years of F-Root UKNOF - Kea overview, Resolver DDOS Mitigation NANOG - How to Fund Open Source DNS-OARC - Case Preservation in BIND ICANN 50 \u0026amp; 51 - DNS Server panel discussion on tech day at both events, How to Fund Open Source AFRINIC - F-Root in Africa LISA - DNS Response Rate Limiting mini-tutorial In addition, we held 2 webinars and organized a meeting about DNS resolver DDOS mitigation measures at the 90th IETF in Toronto.\n* Date chosen to represent \u0026ldquo;current applicable issues.\u0026rdquo; We released BIND 9.9.0 in February, 2011.\n** Date chosen to represent \u0026ldquo;current applicable issues.\u0026rdquo; We released ISC DHCP 4.1.0 in December, 2008.\n","permalink":"/blogs/isc-retrospective-on-2014-open-source-work/","tags":["2014","accomplishments","bugs","open source"],"title":"ISC Retrospective on 2014 Open Source Work"},{"categories":["ISC","Company News"],"contents":"Beginning with the start of 2015, ISC is introducing a new PGP signing key which will be used to verify the authenticity of BIND and DHCP source downloaded from ISC. 
This replaces the current key, which is expiring.\nThe old key for codesign@isc.org, with key ID 45AC7857189CDBC5, was created in 2013 with an expiration date of 31 January 2015, a date that is fast approaching.\nIt is being replaced by a new key with key ID 6FA6EBC9911A4C02, and an expiration date of 31 January 2017.\nUntil the expiration of the 2013 key, ISC will sign code releases with both keys. This includes the development releases released today (BIND 9.9.7b1 and BIND 9.10.2b1.) You may therefore encounter a message from PGP or GPG when verifying your download if you do not have both keys in your keyring. You can disregard such messages as long as PGP or GPG confirms a valid signature with at least one of the keys.\nBoth keys are available from the ISC website:\nhttps://www.isc.org/pgpkey\nAnd if you need instructions on how to verify a download using PGP or GPG, a brief summary can be found in the ISC Knowledgebase:\nhttps://kb.isc.org/docs/aa-01225\nGiven the recent security incident with the ISC website, some will naturally ask whether the retirement of the old key was prompted by security concerns. The answer to that is no; we have no suspicion that the old key was compromised in any way. The key change is motivated solely by the January 31, 2015 expiration date that was set when the key was generated years ago. We are choosing this time to issue the replacement to allow an interim period during which people have time to retrieve the new key.\nSome parties may also have reservations about trusting a key downloaded from a site that was recently compromised. 
If you prefer, you can download the key from the public key server https://pgp.mit.edu.\nPlease take note that after 31 January 2015, new releases will no longer be signed using the expiring key (key id 45AC7857189CDBC5), so if you use PGP or GPG to check the integrity of your downloads you should import the new key before that occurs.\nMichael McNally, ISC Support\n","permalink":"/blogs/new-code-signing-key-for-2015-2017/","tags":["PGP"],"title":"New Code-Signing Key for 2015-2017"},{"categories":["BIND","DNS","Security"],"contents":"We have today posted updated versions of BIND 9.9.6 and 9.10.1 to address a significant security vulnerability in DNS resolution. The flaw was discovered by Florian Maury of ANSSI, and applies to any recursive resolver that does not support a limit on the number of recursions. [CERTFR-2014-AVI-512, https://www.cert.ssi.gouv.fr/avis/CERTFR-2014-AVI-512/], [USCERT VU#264212, https://www.kb.cert.org/vuls/id/264212/]\nA flaw in delegation handling could be exploited to put named into an infinite loop, in which each lookup of a name server triggered additional lookups of more name servers. This has been addressed by placing limits on the number of levels of recursion named will allow (default 7), and on the number of queries that it will send before terminating a recursive query (default 50). The recursion depth limit is configured via the max-recursion-depth option, and the query limit via the max-recursion-queries option. For more information, see the security advisory at [CVE-2014-8500] [RT #37580].\nIn addition, we have also corrected a potential security vulnerability in the GeoIP feature in the 9.10.1 release only. 
For more information on this issue, see the security advisory at [CVE-2014-8680].\nPlease review the release notes, which are available from ISC\u0026rsquo;s FTP server.\n","permalink":"/blogs/important-security-advisory-posted/","tags":["security","CVE"],"title":"Important Security Advisory Posted"},{"categories":["ISC","F-Root"],"contents":"ISC has operated F-Root, one of the world’s thirteen root name servers, since 1994. We have this service deployed around the world in 55 locations to offer fast, reliable access even in otherwise underserved parts of the world. We have well over a thousand peers. F-Root is supported with the help of multi-year donations in kind from many service providers and other Internet organizations. We partner with regional Internet registries, including LACNIC, APNIC, and AfriNIC, and many local sponsors to deploy F-Root all over the globe, with hosting services provided by Equinix and others, and transit bandwidth donated by NTT, Hurricane, Telia, Cogent, and AboveNet. Cisco, Juniper, and Arbor Networks have all donated hardware for F-Root too.\nAs the F-Root operator, we participate in technical policy discussions about the future of the Internet and its critical infrastructure. This week, we’ve been at the ICANN51 meetings in Los Angeles, where people are talking a lot about “accountability.”\nThat word “accountability” means different things to different people. For us and F-Root, it starts with realizing that it’s not enough to do an excellent job providing DNS service for the root to users throughout the world. We have to communicate with the broader Internet community, so people will know they can rely on us to continue providing the service, according to principles that put their interests first. We’ll keep the systems running. If something breaks, we’ll fix it. And as the Internet grows, so will our service. Our mission is to support the infrastructure of the Internet. 
We are committed to providing responsive, affordable DNS root service to underserved communities around the world, and are adding root nodes in those areas as we get funding or sponsorship for them.\nWe demonstrate these values in the information we make available about F-Root: where our servers are and how to talk to us about peering or adding an F-Root site of your own. Third-party services such as dnsmon also make it possible for anyone, anywhere to see how we’re doing.\nWe work with other root server operators and the broader Internet community to build on our shared commitments to each other and to our users. ISC was a founding member of the Root Server System Advisory Committee (RSSAC) in ICANN and the first to sign an agreement with ICANN regarding the operation of a root name server; you can see the document at http://archive.icann.org/en/froot/ICANN-ISC-MRA-26dec07.pdf. We’ve contributed to the recent restructuring of RSSAC for increased effectiveness and transparency, and to documents in progress to tell the community more about how the root servers operate and what we\u0026rsquo;re committed to providing. At ICANN51 and other events, we’ve met with other root server operators, ICANN staff, DNS operators, and government officials to get their input on services and information we could offer.\nWe also want to hear from you! What do you want to know about F-Root? Contact the team via our contact form or f-root@isc.org.\nJim Martin, Director of Operations\n","permalink":"/blogs/icann-51/","tags":["F-Root"],"title":"ICANN 51: Accountability for F-Root Operations"},{"categories":["DHCP","ISC DHCP","security"],"contents":"As most of our visitors are already aware, this week saw the disclosure of a very serious security flaw in the \u0026ldquo;Bourne-again Shell,\u0026rdquo; bash. 
(See CVE-2014-6271 and CVE-2014-7169.)\nThe flaw allows remote execution of arbitrary commands by the shell if an attacker can cause data to be passed to the shell as the value of a shell environment variable.\nDespite reports claiming that a 2011 change (CVE-2011-0997) to dhclient prevents exploitation of this flaw, ISC has confirmed that the DHCP client provided as a part of ISC DHCP can be used to exploit the bash vulnerability if the operator of a rogue DHCP server passes a specially constructed value as the payload of a DHCP option field.\nFor this and many other reasons, all users running a vulnerable version of bash are advised to update to a secured version as quickly as possible.\nPostscript: Readers will naturally want to know whether other ISC products can be used to exploit this condition. We know of no vulnerability in the ISC DHCP server or in BIND 9 that can be used as a vector to exploit the bash flaw. Given the seriousness of this flaw, we nevertheless strongly recommend upgrading to a secure version of bash.\n","permalink":"/blogs/bashbug/","tags":["bash","bugs","security"],"title":"ISC’s DHCP Client Can Be Used as a Delivery Vector for bash Bug"},{"categories":["BIND","DNS"],"contents":"Support for the CAA record was added to BIND with the 9.10.1B release, after Rick Andrews of Symantec approached us at an IETF meeting and asked why we didn\u0026rsquo;t have it already. Rick is an expert and evangelist for the use of certificates, so we invited Rick to explain why people should use CAA records.\nCertificate Authority Authorization (CAA) (RFC 6844) is intended to reduce the risk of unintended SSL/TLS certificate mis-issuance, either by malicious actors or by honest mistake. 
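In zone-file form, a CAA record carries a flags field, a tag, and a value; a sketch (the issuer domain here is hypothetical):

```
; Authorize only the CA identified by "ca.example.net" (hypothetical)
; to issue certificates for example.com, per the RFC 6844 "issue" tag.
example.com.    IN    CAA    0 issue "ca.example.net"
```

A CA that honors CAA would check for this record before issuance, and could decline or warn if it is not listed.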
The goal is to allow a DNS domain name holder to specify the certificate authority or authorities that the owner has authorized to issue SSL/TLS certificates for that domain.\nFor example, if you own example.com, and wish to express your preference that certificates for that domain should only be issued by Primary CA, Inc., you would create a record in DNS indicating such. If a malicious actor, or an employee who is not aware of your preference, engages a different CA, Secondary CA, Inc. to purchase a certificate, Secondary CA might first check in DNS. If they see that you have a CAA record that does not specify Secondary CA as a preferred certificate authority, Secondary CA could alert you of that. You could then choose to deny the certificate purchase, or change or add to DNS your preference to allow Secondary CA to issue certificates for your domain.\nFor this reason, we recommend use of the CAA record.\nRick Andrews, Senior Technical Director for Trust Services, Symantec.com\n","permalink":"/blogs/certificate-authority-authorization-records/","tags":["BIND 9.10","certificates","Symantec"],"title":"Certificate Authority Authorization Records"},{"categories":["BIND","DNS"],"contents":"Our users value stability and security above anything else, when it comes to BIND. Every time we have to issue a security advisory we are inconveniencing thousands of network administrators. We also know they would rather be informed if there is some way to compromise or crash BIND. So, when we read that Codenomicon discovered the Heartbleed bug in OpenSSL, we decided to contact Codenomicon directly and learn if they would scan BIND as well. I was a bit worried that BIND might be next in the news, if we had a hidden serious defect; we decided to find out.\nISC takes full advantage of the Coverity Scan program for Open Source. We schedule our three major products (BIND, ISC DHCP, and Kea) for frequent analysis with Coverity, and try to stay on top of any issues that are flagged. 
We very much appreciate the test coverage we get from that. (You can see our Coverity status on the badges on the BIND and DHCP software pages on this website.)\nScanning with Codenomicon is complementary to Coverity. Codenomicon develops software that stress tests networking products by throwing unexpected or malformed packets at the product. It is a specialized form of negative testing that can be extremely successful at uncovering serious bugs that are latent in your software. It is less well-known in the open source community that Codenomicon also offers free scans to selected open source codebases through their Codenomicon Robust Open Source Software program (CROSS).\nWe wholeheartedly recommend the CROSS program to other open source developers. Our experience with the CROSS program has been completely positive. First of all, there was a minimum of fuss, delay, or administrative overhead. Within a couple of days after I contacted them, BIND was accepted into the program; a Codenomicon engineer downloaded the latest version of our source code, compiled it, and analyzed it to decide which protocol suites to run against it. They set up an account to monitor the progress of the scan. Ten days after I contacted them, BIND had already passed over 4 million test cases.\nUnfortunately, after about 10 days, they hit a failure. First, they found a crash in dig. When we analyzed this, we realized the same code was also in BIND 9. Then, they found the same crash again in BIND. By then, we were already working on a patch. We posted CVE-2014-3859 on June 11th, \u0026ldquo;A query specially crafted to exploit a defect in EDNS option processing can cause named to terminate with an assertion failure” and issued BIND version 9.10.0-P2.\nWhen I first contacted Codenomicon I was concerned, because of the publicity surrounding Heartbleed, that they might not notify ISC first if they found a defect, and allow us to manage the disclosure. I didn’t need to worry. 
Once the Codenomicon scan found a critical defect, they offered to help us with diagnosis, kept scanning, AND didn’t tell anyone else. What more could you ask for? We were able to follow an orderly phased disclosure process, issuing a security advisory and a patch, without any leaks or pressure from Codenomicon. Considering the PR value to them of publicizing a significant BIND vulnerability immediately following the Heartbleed bug, we were much relieved to learn their process and business culture are world-class. We highly recommend Codenomicon. If you are an open source developer, consider applying for the free CROSS program for open source.\n","permalink":"/blogs/open-source-fuzz-testing-from-codenomicon/","tags":["security","Codenomicon"],"title":"Codenomicon Testing for BIND"},{"categories":["BIND","DNS"],"contents":"This is the last of three blog posts introducing the new features in BIND 9.10. With BIND 9.10 we continue improving DNSSEC support. For the complete list of new features, see the BIND 9.10 Release Notes.\nDNSSEC Improvements PKCS#11 API for direct control of an HSM. A new compile-time option (\u0026ldquo;configure --enable-native-pkcs11\u0026rdquo;) allows the BIND 9 cryptography functions to use the PKCS#11 API natively, so that BIND can drive a hardware security module directly instead of using a modified OpenSSL as an intermediary. This has been tested with the Thales nShield HSM, and with SoftHSMv2 from the OpenDNSSEC project. (See the \u0026ldquo;Thales ISC Solution Brief\u0026rdquo; from Thales.) Information about this option and how to use it is in the BIND 9.10 ARM. We are planning a joint webinar with Thales on the topic in June, 2014. (Note that as of 2024, the Thales HSM product line is apparently part of the Entrust product line. At the same time, the PKCS#11 interface has also changed. 
Please consult the BIND ARM for the latest information.)\nThree new options facilitate key management:\nThe new \u0026ldquo;dnssec-signzone -Q\u0026rdquo; option causes dnssec-signzone, when re-signing a zone, to drop signatures from keys that are still published but are no longer active. This makes it easier to roll DNSSEC keys according to the \u0026ldquo;pre-publish key rollover\u0026rdquo; method described in RFC 4641, section 4.2.1.1. The new \u0026ldquo;dnssec-importkey\u0026rdquo; command allows the use of offline DNSSEC keys with automatic DNSKEY management. This allows an inline signing zone to publish or unpublish DNSKEY records on schedule even if it doesn\u0026rsquo;t have access to the corresponding private key data (arguably a bug fix). The new \u0026ldquo;max-zone-ttl\u0026rdquo; option enforces maximum TTLs for zones. This can simplify the process of rolling DNSSEC keys by guaranteeing that cached signatures will have expired within the specified amount of time. Loading a zone with a higher TTL will fail. DDNS updates with higher TTLs are accepted but the TTL is truncated. (Note: Currently supported for master zones only; inline-signing slaves will be added.) Other Cryptographic Improvements The \u0026ldquo;rndc\u0026rdquo; command now supports new key algorithms in addition to HMAC-MD5, including HMAC-SHA1, -SHA224, -SHA256, -SHA384, and -SHA512. The -A option to rndc-confgen can be used to select the algorithm for the generated key. (The default is still HMAC-MD5; this may change in a future release.) Specifying the keyword \u0026ldquo;auto\u0026rdquo; instead of a salt when using \u0026ldquo;rndc signing nsec3param\u0026rdquo; will cause named to select a 64bit salt at random. Hard-to-Classify New Features Multiple DLZ databases can now be configured in the same server. Prior to release 9.10, BIND supported only one DLZ database; now you can have a DLZ module for redirect zones, while keeping the DLZ module you already have. 
Details of how to configure and use this expanded capability as part of an expansion of NXDOMAIN redirection can be found in NXDOMAIN Redirection Using DLZ in BIND 9.10 and later.\nRPZ now allows response policies to be configured based on the IP address of the client.\nThe Windows installer now places files in the Program Files area rather than system services. Several features previously unavailable on Windows are now available, including \u0026ldquo;delv\u0026rdquo; and the \u0026ldquo;export library\u0026rdquo; APIs, including libirs. The Python tools \u0026ldquo;dnssec-coverage\u0026rdquo; and \u0026ldquo;dnssec-checkds\u0026rdquo; can now be enabled on Windows via \u0026ldquo;Configure\u0026rdquo;, but are not included in the installation zip files. All versions of Visual Studio up to 2013 are now supported, and support has been added for 64-bit builds.\nSource Identity Tokens (SIT) are similar to DNS Cookies (invented by Donald Eastlake and described in draft-eastlake-dnsext-cookies-04); these are designed to enable clients to detect off-path spoofed responses, and to enable servers to detect spoofed-source queries. Servers can be configured to send smaller responses to clients that have not identified themselves using a SIT option, reducing the effectiveness of amplification attacks. RRL processing has also been updated; clients proven to be legitimate via SIT are not subject to rate limiting. This feature is experimental in BIND 9.10 as the draft is updated.\nFinally, in addition to all the new features and performance enhancements, there are some changes in existing features that you need to know about.\nChanges to Existing Behavior The internal and export versions of the BIND libraries (libisc, libdns, etc.) have been unified, so that when BIND 9 is built with shared libraries, other applications (e.g., ISC DHCP) can use those same libraries. 
Previously it was necessary to build two versions of the libraries, one for BIND 9 and another for external applications, but this is no longer the case. BIND 9.10 listens on IPv6 as well as IPv4 interfaces by default; it is no longer necessary to specify a \u0026ldquo;listen-on-v6\u0026rdquo; option. On operating systems that support routing sockets, including Mac OS X, BSD and Linux, network interfaces are re-scanned automatically whenever they change. Use \u0026ldquo;automatic-interface-scan no;\u0026rdquo; to disable this feature. Use \u0026ldquo;rndc scan\u0026rdquo; to trigger an interface scan manually. Threads are now enabled by default on most operating systems, including Linux. Operators who were not previously using threads may see some changes in behavior. named now preserves the capitalization of names when responding to queries. For instance, a query for \u0026ldquo;example.com\u0026rdquo; may be answered with \u0026ldquo;example.COM\u0026rdquo; if the name was configured that way in the zone file. Previously, the case of the answer always matched the case of the query rather than the case of the name configured in the DNS, and some clients have a bug that causes them to depend on that older behavior. Such clients can be specified in the new \u0026ldquo;no-case-compress\u0026rdquo; ACL; this restores the older behavior of named for those clients only. 
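Sketched in named.conf terms (the client netblock here is an illustrative placeholder), the interface-scanning and case-handling changes look like this:

```
options {
    // Interfaces are now rescanned automatically on OSes with routing
    // sockets; set this to "no" to restore the old behavior:
    automatic-interface-scan yes;

    // Clients that depend on the old case behavior can be listed here;
    // 192.0.2.0/24 is a placeholder for such a client netblock:
    no-case-compress { 192.0.2.0/24; };
};
```

When automatic scanning is disabled, "rndc scan" triggers a manual rescan.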
","permalink":"/blogs/bind-9-10-dnssec-crypto-and-changes-to-existing-behavior/","tags":["BIND 9.10","DNSSEC"],"title":"BIND 9.10 – DNSSEC, Crypto and Changes to Existing Behavior"},{"categories":["BIND","DNS","security"],"contents":"This week several of our customers have contacted us to inquire about our reaction to this article, entitled \u0026ldquo;Critical Vulnerability in BIND Software Puts DNS Protocol Security at Risk.\u0026rdquo;\nISC would like to clarify that we evaluated the risk from this issue in 2013 when it was disclosed to us, and do not judge it to be a \u0026ldquo;critical vulnerability\u0026rdquo; or feel that it \u0026ldquo;puts DNS protocol security at risk.\u0026rdquo; The article linked above is light on details, but you can follow this link to the original presentation from Woot \u0026lsquo;13 if you would like more background information on the SRTT algorithm flaw that allows an attacker to influence selection of a specific nameserver from the servers available in the NS record RRSET.\nThe authors of that paper responsibly reported the issue to ISC prior to their conference presentation and we evaluated it for its security threat potential at that time. We reached the conclusion that the technique described did not by itself constitute an exploitable defect in BIND security, but that it did have potential for use as an enhancement for other attacks. In order to explain the matter and make operators aware of it, we issued an Operational Notification for BIND admins and announced it on public mailing lists in August 2013.\nRenewed interest in this matter has prompted us to re-examine the issue to see whether any new information has changed our opinion of the issue\u0026rsquo;s severity. At this time we still believe that the manipulation of server selection through exploitation of a flaw in the SRTT algorithm represents at best a supplement to other attack vectors. 
Nevertheless, ISC intends to correct the flaw in a future release of BIND but has not committed to a timetable for doing so.\nIf you are aware of an active exploit which uses this technique, or if you believe you are aware of an implication we may not have considered, we encourage you to share your concerns with our ISC Security Officers by e-mailing security-officer@isc.org. Please encrypt any communications containing sensitive security information using the Security Officer PGP key.\nThank you for the opportunity to clarify this matter.\nMichael McNally, ISC Support Engineer\n","permalink":"/blogs/isc-responds-to-questions-raised-about-srtt-algorithm-flaw/","tags":["BIND","SRTT"],"title":"ISC Responds to Questions Raised About SRTT Algorithm Flaw"},{"categories":["BIND","DNS"],"contents":"BIND 9.10 brings updates to statistics, troubleshooting tools, and some helpful utilities for zone configuration. The release notes are now posted alongside the software download, and we have created a section in the Knowledgebase for articles on the new features.\nStatistics Update XML statistics reported by BIND have been refocused on the \u0026ldquo;newer\u0026rdquo; format. BIND can provide statistics in either XML or JSON formats. JSON is significantly faster than XML, but is not yet supported on Windows. Previous versions of BIND had offered the option of XML statistics in older (v2) or newer (v3) format. The older version 2 statistics date back to BIND 9.6.0, introduced in 2008. BIND 9.10 offers only the v3 format, but offers it in the default build (without needing to custom-build BIND with statistics enabled).\nThe statistics channel now also includes many new statistics, including stats for the resolver, cache, address database, dispatch manager, and task manager, which can be used to monitor server health. New URLs have been added to the statistics channel to provide broken-out subgroups of statistics so as to reduce parsing complexity. 
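As a minimal sketch (the port and ACL are illustrative choices, not defaults), the statistics channel is enabled with a statistics-channels block:

```
// named.conf fragment: serve statistics over HTTP on the loopback only.
statistics-channels {
    inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
};
```

With that in place, the full v3 XML is available at http://127.0.0.1:8053/xml/v3, broken-out subgroups at paths such as /xml/v3/server, and JSON (where compiled in) at /json/v1.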
The XSL stylesheet that enables interpretation of XML statistics can now be cached by the browser. New counters track TCP and UDP queries on a per-zone basis. This satisfies the new ICANN reporting requirement for new Generic Top Level Domains (gTLDs). See Operating statistics provided by BIND statistics channels for more information about the XML statistics channel and its usage.\nBIND Tools Release 9.10 previews the Domain Entity Lookup and Validation engine (DELV), a new DNSSEC troubleshooting tool intended to eventually obsolete \u0026ldquo;dig +sigchase\u0026rdquo;. See Eleven, twelve; dig and delv: BIND 9.10 for more information.\nThe dig tool now has EDNS client-subnet support and EDNS Expire support. \u0026ldquo;dig +subnet\u0026rdquo; sends an EDNS CLIENT-SUBNET option when querying. \u0026ldquo;dig +expire\u0026rdquo; sends an EDNS EXPIRE option when querying. When this option is sent with an SOA query to a server that supports it, it will report the expiry time of a slave zone.\nA new command makes it easier for others to help you troubleshoot your configuration.\nThe new \u0026ldquo;named-checkconf -px\u0026rdquo; option will print the contents of configuration files with shared secrets obscured. This makes it easier to share your server configuration \u0026ndash; for example, when reporting a bug \u0026ndash; without revealing private information. The \u0026ldquo;named-checkzone\u0026rdquo; and \u0026ldquo;named-compilezone\u0026rdquo; commands can now read journal files, allowing them to read the current state of a dynamic zone without freezing it and syncing its journal file first. The new \u0026ldquo;named-rrchecker\u0026rdquo; tool can be used to verify the syntactic correctness of individual resource records, or to convert them into a canonical format so that a newly defined record type can be loaded into an older name server that doesn\u0026rsquo;t recognize it. 
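A few of the tools described above, sketched as command lines (the file names, domain names, and the exact record syntax accepted by named-rrchecker are illustrative; consult the manual pages):

```shell
# Print a configuration with shared secrets obscured, e.g. for a bug report:
named-checkconf -px /etc/named.conf

# Send an EDNS CLIENT-SUBNET option with a query:
dig +subnet=192.0.2.0/24 www.example.com A

# Ask for the expiry time of a slave zone via EDNS EXPIRE:
dig +expire example.com SOA

# Check the syntax of a single resource record read from stdin:
echo "IN A 192.0.2.1" | named-rrchecker -p
```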
The new \u0026ldquo;rndc zonestatus\u0026rdquo; command reports information about a specified zone, including configuration details, last load time, serial number, and when the next automatic zone maintenance events are scheduled. ","permalink":"/blogs/bind-9-10-statistics-troubleshooting-and-zone-configuration/","tags":["BIND 9.10"],"title":"BIND 9.10 – Statistics, Troubleshooting and Zone Configuration"},{"categories":["BIND","DNS"],"contents":"We are proud to announce that today we posted a major new release of BIND. This new 9.10 branch will be the fourth simultaneous release train ISC is supporting, alongside 9.8, 9.9, and 9.9-subscriber. (In January 2014, ISC ended support for the 9.6 branch, launched in 2008, as previously announced.) We recommend that administrators of large systems run one of our older, stable branches, such as a 9.8- or 9.9-based release, on their critical production systems, while testing the new 9.10 branch until its second or third maintenance release.\nThe release notes are now posted alongside the software download, and we have created a section in the Knowledgebase for articles on the new features.\nResponse Rate Limiting. One of the major features in 9.10 was actually first introduced in the 9.9 train, in the 9.9.4 maintenance release. We violated our usual policy of limiting maintenance releases to bug fixes because this feature (Response Rate Limiting) was so important. If you are not using it yet on your authoritative servers, you should consider it now.\nRRL has proven to be so valuable and effective that it is now included in the default software-build configuration, which means that you can use RRL without having to configure and create a custom version of BIND. Instructions for using RRL are in Using the Response Rate Limiting Feature and (in somewhat more detail) in the BIND 9.10 ARM. 
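As a sketch, a starting-point rate-limit block might look like this (the values are illustrative, not tuned recommendations; see the ARM for the full option list):

```
options {
    rate-limit {
        responses-per-second 10; // cap on identical responses per client netblock
        window 5;                // seconds over which response rates are tracked
        slip 2;                  // let every 2nd dropped answer through, truncated
    };
};
```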
We held a webinar \u0026ldquo;RRL \u0026ndash; Strategies for a Successful Deployment\u0026rdquo; in November, 2013, in which Eddy Winstead interviewed Peter Losher, our Senior Systems Engineer who deployed RRL for F-Root. Also note our earlier webinar on RRL.\nGeoIP support was also introduced in 9.9, through the subscription branch. With release 9.10 this feature is available to everyone. BIND 9 access control lists are used to give access to various server functions according to the IP address from which they were requested. BIND 9.10 is able to use data from MaxMind GeoIP databases to achieve restrictions based on the (presumed) geographic location of that address. The ACL itself is still address-based, but the GeoIP-based specification mechanisms can easily populate an ACL with addresses in a certain geographic location. This capability was derived from code contributed by Ken Brownfield. An interesting use of geographic ACLs is to offer different BIND views to clients in different geographic locations. See Using the GeoIP Features in BIND 9.10 for more information.\nOne of the major themes for BIND 9.10 is performance improvement. There are significant enhancements for both authoritative and recursive operations in many of the areas where we have heard requirements for better performance.\nPerformance Improvements New format option for zone files stored on disk allows substantially faster zone loading. Most of the time required for reloading BIND is consumed by parsing the zone files. With this new feature, zone files can be saved in a \u0026ldquo;pre-compiled\u0026rdquo; format. This feature applies to authoritative services, and specifically to \u0026ldquo;slaves.\u0026rdquo; Consult the BIND 9.10 Administrator\u0026rsquo;s Reference Manual (ARM) to learn when using map format would be a good idea, when it would be a bad idea, and the details of how to set up and use map-format zone files. 
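Under those caveats, a slave zone using the map format might be sketched like this (the address and file names are illustrative):

```
zone "example.com" {
    type slave;
    masters { 192.0.2.1; };
    file "example.com.map";
    masterfile-format map;  // load/dump the zone in the fast binary format
};
```

An existing text zone file can also be converted offline with "named-compilezone -F map -o example.com.map example.com example.com.zone".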
See Using the \u0026lsquo;map\u0026rsquo; zone file format in BIND for a summary of appropriate and inappropriate usage of map-format zone files.\nDNS Pre-fetch can improve recursive resolver performance. DNS resource records that are received by a resolver are kept in its cache until they expire. BIND 9.10 now offers a \u0026ldquo;prefetch\u0026rdquo; option. When someone requests a record in the cache, BIND will serve that record, but also fetch a new copy, so it is fresh in the cache for the next requestor. This will improve the performance delivered to end users for resolving names that have short expiration times. See Early refresh of cache records (cache prefetch) in BIND for more information about this new \u0026ldquo;prefetch\u0026rdquo; option.\nBIND \u0026ldquo;views\u0026rdquo; can now share zone files, eliminating duplication of zone data for multiple views and saving memory.\nWe made substantial improvements in Response Policy Zone (RPZ) performance. See DNSRPZ performance and scalability when using multiple RPZ zones for more information about this update and for a refresher on the RPZ mechanism and the impact of its use.\nEDNS processing better tracks remote server capabilities when handling recursive queries. Instead of sending larger packets and gradually decreasing packet size when it receives errors, BIND 9 now takes a more pessimistic approach: it starts with small packets, graduating to larger sizes until it encounters errors. This should improve recursive performance when handling multiple authoritative servers and will also help in situations where connectivity is intermittent or limited by older or misconfigured in-path equipment.\nA new \u0026ldquo;large server tuning\u0026rdquo; option sets constants and default settings to values suited to large servers with abundant memory. This can improve performance on such servers, but will consume more memory and may degrade performance on smaller systems. 
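The prefetch behavior described above maps to a single options setting; the syntax follows the 9.10 ARM, and the values shown are the defaults as we understand them:

```
options {
    // Re-fetch a cached record when a query arrives with 2 seconds or
    // less of TTL remaining, for records whose original TTL was at
    // least 9 seconds:
    prefetch 2 9;
};
```

The large-server tuning, by contrast, is selected at build time; in 9.10 it is exposed as a configure option, "./configure --with-tuning=large".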
In addition, adaptive mutex locks are now supported. This has been found to improve performance under load on systems that support them.\nTo read more about the new features in BIND 9.10, check the article New Features in BIND 9.10 or the release notes.\n","permalink":"/blogs/bind-9-10-a-new-branch/","tags":["BIND 9.10"],"title":"BIND 9.10 – A New Branch"},{"categories":["ISC","Company News"],"contents":"Internet Systems Consortium (ISC) today announced the release of version 1.2 of its BIND 10 software, and with that release announced that ISC has concluded its development work on BIND 10 and will no longer be updating the source pool.\nBIND 10 release 1.2 consists of an authoritative server, a control framework, an application interface, a statistics server, a logging framework, a remote control daemon, a configuration client tool, and numerous other tools for its development and operation.\n“BIND 10 is an excellent software system,” said Scott Mann, ISC’s Vice President of Engineering, “and a huge step forward in open-source infrastructure software. Unfortunately, we do not have the resources to continue development on both projects, and BIND 9 is much more widely used.”\n“The BIND 10 software is open-source,” Scott added, “so we are making it available for anyone who wants to continue its development. The source will be available from GitHub under the name Bundy, to mitigate the confusion between it and ISC’s BIND 9 (a completely separate system). The name ‘BIND’ is associated with ISC; we have changed its name as a reminder that ISC is no longer involved with the project.\u0026quot;\nBIND 10 release 1.2 is available from ISC at https://www.isc.org/othersoftware/.\nBIND 10 was a multi-year development project with numerous sponsors around the world. 
ISC is grateful for support received from Afilias, AFNIC, Association DNS.PT, Brazilian Network Information Center (NIC.BR), Canadian Internet Registry Authority (CIRA), China Internet Network Information Center (CNNIC), Comcast, CZ NIC z.s.p.o, DENIC eG, Google Inc., IIS, Japan Registry Services Co, Ltd. (JPRS), Nominet UK, New Zealand Registry Services (NZRS), Réseaux IP Européens Network Coordination Centre (RIPE NCC), Stichting Internet Domainregistratie Nederland (SIDN), Technical Center of Internet, and Uniforum SA. We expect the continuing development of Bundy to involve an equally diverse collection of developers and supporters.\nTo follow Bundy going forward, see http://bundy-dns.de.\nMedia Contact:\nVicky Risk\nvicky@isc.org\n+1 650 423 1457\n","permalink":"/blogs/isc-concludes-bind-10-development-with-release-1-2-project-renamed-bundy/","tags":["BIND 10"],"title":"ISC Concludes BIND 10 Development with Release 1.2"},{"categories":["BIND","DNS"],"contents":"THANK YOU As we prepare to release the next major version of BIND, we have to remind all our users that there is a sizeable community supporting BIND, outside of ISC. We depend on a committed community of beta testers, patch contributors, and OS packagers to help us improve our software and deliver it to end users. Once we release it, a cadre of experts in BIND and DNS helps anyone who posts to the bind-users@lists.isc.org mailing list with configuration and troubleshooting advice. 
While we can\u0026rsquo;t possibly name everyone who helped, we would especially like to recognize those people who contributed code or assistance in creating the new features in 9.10.\nContributors to 9.10 new features Pierre Beyssac Ken Brownfield John Eaglesham Tony Finch Wilmer van der Gaast Vadim Goncharov Timothe Litt Peter Palfrader Kevin Sheehan Tim Tessier Vernon Schryver Please accept our apologies if we have omitted anyone.\nWe also owe thanks to the members of the Internet Engineering Task Force, which drives the development of the DNS standards.\nBIND 9.10 adds support for these Internet Drafts Two new features in BIND were based on these two IETF drafts. The drafts themselves are not finalized and ISC\u0026rsquo;s implementation of them is not exactly as written in the draft.\nDNS Pre-fetch is based on:\nHighly Automated Method for Maintaining Expiring Records, by Warren Kumari, Roy Arends and Suzanne Woolf Source-Identity Tokens (SIT) is based on:\nDomain Name System (DNS) Cookies, by Donald Eastlake Support for these drafts was added in the DIG tool Client Subnet in DNS Requests, by C. Contavalli, W. van der Gaast, S. Leach, and E. Lewis EDNS EXPIRE OPTION, by Mark Andrews ","permalink":"/blogs/acknowledgements-for-bind-9-10/","tags":["BIND","contributor","IETF","patch"],"title":"Acknowledgements for BIND 9.10"},{"categories":["BIND","DNS","ISC DHCP","security"],"contents":"Earlier this week, the OpenSSL project announced CVE-2014-0160, disclosing a very serious security flaw in the OpenSSL library, affecting versions 1.0.1 and 1.0.2-beta (including OpenSSL 1.0.1f and 1.0.2-beta1). In many stories, this vulnerability is being referred to as the \u0026ldquo;Heartbleed\u0026rdquo; bug.\nBecause ISC products can be built to link against OpenSSL libraries, users of BIND 9 and ISC DHCP have asked us to clarify whether or not their systems are at risk due to CVE-2014-0160. 
Rather than answer questions individually, we hope that this will clarify the matter for our users and reassure them that their services are safe from this security vulnerability.\nIs BIND vulnerable? After consulting with our developers, we are pleased to report that BIND 9 does not make use of the vulnerable parts of the OpenSSL libraries, so BIND services are NOT at risk from CVE-2014-0160. Is ISC DHCP vulnerable? ISC DHCP does not use the affected parts of the OpenSSL library, either. ISC DHCP services are NOT at risk from CVE-2014-0160. What about Windows binary packages? For the benefit of Windows users, ISC provides installable binary distributions of BIND 9 for those who wish to run it on Windows servers. At the time of this message, the most recent Windows binary distributions include vulnerable versions of the OpenSSL shared libraries. These shared library files are safe for use with BIND 9 because BIND does not use the flawed parts of the library, but operators should not use the provided libraries with other applications. Future versions of the Windows binary distributions will include updated OpenSSL libraries with the security issues fixed, but we have no current plans to release emergency security releases for Windows because the libraries provided are safe for BIND 9. ","permalink":"/blogs/bind-dhcp-and-cve-2014-0160-the-openssl-heartbleed-bug/","tags":["BIND","CVE","DHCP","Heartbleed","security"],"title":"BIND, DHCP, and CVE-2014-0160 (the OpenSSL “Heartbleed” Bug)"},{"categories":["ISC","Company News"],"contents":"ISC founder Paul Vixie recently joined the elite ranks of membership in the Internet Hall of Fame. Each Hall of Fame member has made significant contributions to the advancement of the global Internet. Most of the work for which Dr. Vixie was cited was accomplished while he was at ISC, and ISC was definitely guided and shaped by his vision.\nNamed to the Hall of Fame as an Internet Innovator, Dr. 
Vixie\u0026rsquo;s extensive technical contributions include DNS protocol design and procedure, mechanisms to achieve operational robustness of DNS implementations, and significant contributions to open source software principles and methodology. He also created and launched the first successful commercial anti-spam service. His relentless innovation and leadership in Internet infrastructure have unquestionably made the Internet a better, faster, and safer place.\nDr. Vixie joins two members of ISC\u0026rsquo;s Board of Directors in the Internet Hall of Fame: Dr. David Farber and Dr. Stephen Wolff.\nPaul Vixie left ISC in 2013 to found Farsight Security, where he is its Chairman and CEO.\n","permalink":"/blogs/isc-founder-paul-vixie-inducted-into-the-2014-internet-hall-of-fame/","tags":["Internet Hall of Fame"],"title":"ISC Founder Paul Vixie Inducted Into the Internet Hall of Fame"},{"categories":["DHCP","ISC DHCP"],"contents":"ISC DHCP 4.3.0 is now online!\nAs mentioned in the previous blog by Shawn Routhier, the theme behind the release of DHCP 4.3.0 is “IPv6 uplift.” Certain features that were specifically designed for DHCPv4 are now available for DHCPv6, making management of an IPv6 network simpler. As Shawn stated, \u0026ldquo;[we] tried to use the same or similar commands so that moving from a DHCPv4 feature to DHCPv6 is as painless as possible.\u0026rdquo;\nInformation on DHCP 4.3.0 We have published multiple sources of information to get you started with DHCP 4.3.0, including:\nKnowledgebase articles: https://kb.isc.org\nYouTube video:\nDownload ISC DHCP 4.3.0 The tarball and release notes for this release can be found on our Downloads page.\nISC’s Release Signing Key can be obtained at our OpenPGP Key page.\nDo you have any comments, suggestions, or ideas? 
Feel free to share them via email to marketing at isc dot org!\nAdib Behjat\n","permalink":"/blogs/isc-dhcp-4-3-0-is-live/","tags":["DHCP","DHCP 4.3.0","IPv6","IPv6 Uplift"],"title":"ISC DHCP 4.3.0 is Live!"},{"categories":["BIND"],"contents":"Internet Systems Consortium (ISC) is pleased to announce the completion of a multi-year software development program funded by JPRS and other key sponsors. The next-generation nameserver software framework is available as the BIND 10 distribution, which has been well received by sponsors and the open source community.\n“We are proud to have been able to work closely with our sponsors to deliver quality nameserver software as freely available open source,” stated Jeff Osborn, ISC Executive Director. “It also marks the conclusion of a multi-year development program funded by our generous sponsors. We look forward to continuing development on the BIND 10 suite with continued guidance from our broad BIND and ISC DHCP user base.”\nThroughout this development effort, JPRS and other key sponsors have provided development staff as well as funding. The collaborative effort provided valuable insights to all involved. The result of the initial development program of BIND 10 is a production release of authoritative nameserver software.\nDownloads of the BIND 10 authoritative nameserver software from the ISC website continue to increase. During the past few months, BIND 10 has accounted for more than 12% of the total downloads from the isc.org website. 
Feedback from those users will ensure that development continues in the right direction as the software becomes widely tested and later deployed in production.\nISC has been privileged to have support from numerous sponsors during the BIND 10 development effort.\nPlease visit ftp://ftp.isc.org/isc/bind10/1.1.0/bind10-1.1.0.tar.gz to download the software.\nAbout ISC Internet Systems Consortium (ISC) is a 501(c)3 public benefit corporation widely known for world‐class Internet software engineering and network operations. Founded in 1994 under an initial grant from UUNET, ISC is governed today by a 5-member Board of Directors. ISC software, of which BIND and ISC DHCP are the two best‐known examples, is open source. Our passion is Internet core technology. Our widely‐imitated Managed Open Source process ensures the quality of our software while keeping it completely open and available. ISC operates high‐reliability global networks of DNS root servers (F‐Root) and authoritative DNS servers both for non‐profit and commercial enterprises. ISC is actively involved in Internet protocol and standards development, particularly in the areas of DNSSEC and IPv6. ISC is supported by donations from generous sponsors, and by program membership fees.\nAdib Behjat\n","permalink":"/blogs/completion-of-bind-10-initial-development-program-delivers-basis-of-isc-next-generation-nameserver-software/","tags":["BIND 10"],"title":"Completion of BIND 10 Initial Development Program Delivers Basis of ISC Next-Generation Nameserver Software"},{"categories":["ISC"],"contents":"Following a successful public demonstration at the 88th Internet Engineering Task Force (IETF) meeting November 2013, ISC’s Open Home Gateway Forum team has submitted its code for package signing and validation to the OpenWrt project. 
ISC is now disbanding the Open Home Gateway Forum since this code has been completed.\nComcast and ISC created the Open Home Gateway Forum (OHGF) to address problems in managing large numbers of home gateway devices. The OHGF team created and demonstrated enhancements to OpenWrt that enabled a robust process for remote configuration and software upgrade. This could allow an ISP to securely update home gateway devices, and might be leveraged more broadly by OpenWrt to provide automatic software updates. This is important since so much customer premises equipment never receives a software update, creating significant future security vulnerabilities. This new feature can mitigate the risk of software never being updated, which has recently been in the news in an article by Bruce Schneier and in a malware attack affecting Internet devices.\nThe remote configuration and update process uses cryptographic signatures and checksum verification to ensure the integrity of the update source as well as the validity of the software downloaded. These extra security measures protect the home gateway from downloading or installing software delivered by or tampered with by any unauthorized party.\nAs the largest cable Internet Provider in North America, Comcast has a vested interest in solving this problem. One way they have chosen to address it is by funding the development of a transparent open source solution.\nOpenWrt is a highly extensible GNU/Linux distribution that runs on embedded systems from over 80 different manufacturers. OpenWrt is an open source project, meaning that the source code is freely available, and they welcome contributions from anywhere in the world. Many of the millions of home gateways connected to the Internet run software based on OpenWrt. 
Improvements made in OpenWrt can have a wide-ranging impact.\n“We are part of a community working together to address a global Internet problem,” stated Gregers Petersen, OpenWrt Relationship Manager. “We welcome the support of, and collaboration with, ISC and Comcast. The open source model is once again proving to be the best way to tackle the really tough technical problems that affect us all.”\n“OpenWrt is a vibrant open source project and their software is used by many of our Xfinity Internet customers as well as countless other Internet users,” said Jason Livingood, Vice President of Internet \u0026amp; Communications Engineering at Comcast. “We are happy that ISC was able to develop a unique way of secure remote configuration and update, since regular and automatic software updates are a critical part of ensuring the ongoing security of Internet-connected consumer devices.”\nAfter working with the OHGF team, OpenWrt is continuing to add support for detecting and recovering from interrupted installs and is working on a way to safely do home gateway kernel upgrades. For more information about OpenWrt, or to contribute or download the software distribution, see http://www.openwrt.org.\n","permalink":"/blogs/isc-wraps-up-open-home-gateway-forum/","tags":["OHGF"],"title":"ISC Wraps Up Open Home Gateway Forum"},{"categories":["Security"],"contents":"The first time someone writes (or contributes to, so this does not apply only to new systems) an application using low-level cryptography, (s)he has a choice among two bad and two very bad solutions. This blog explains the reasons for this recurrent disaster and tries to give some suggestions to avoid it\u0026hellip;\nThe worst possible solution is of course to build a new crypto implementation from scratch. 
Not only is it a very long task, but there are a lot of critical details which are not easily managed by newcomers; for instance, when generating RSA keys, it is not enough to get two prime numbers of the right size (e.g., if you get two consecutive primes, and there are an infinite number of such pairs, the two primes can be incredibly easily recovered from their product). Note this is specified in the relevant NIST document (and similar documents); it is just not enough to have a big integer library to build a good RSA implementation.\nThe second worst solution is to use a bad crypto library. Note here I am a bit high church about what I call bad; according to my personal criteria a crypto implementation is bad when it was not designed for a strong security usage. For instance:\nThere is no reasonable cryptographic random generator. There is no FIPS 140-2 certified version of the code (e.g., Botan is bad). I have mixed feelings about implementations which only claim to be FIPS 140-2 \u0026ldquo;ready,\u0026rdquo; as the reason they are not a candidate for certification is not always explicit/clear. So the choice is between a solution which supports crypto hardware (i.e., PKCS#11) and a solution which works well in software (i.e., OpenSSL):\nThe software/OpenSSL way (note I refer to OpenSSL because any alternatives in the open source world are very likely based on OpenSSL): It is not so bad. OpenSSL is aggressively optimised for the common cases; heavily used algorithms (i.e., cryptographic protocols in the cryptographers\u0026rsquo; terminology) on current platforms are written in assembly, security bugs are fixed as soon as they are known, and it covers almost everything one can need. I have more concerns about the non-crypto parts, in particular the ASN.1 (lack of) parser and the years-long delays for non-crypto bug fixes. 
But the real problem with OpenSSL and similar software solutions is the support of crypto hardware; PKCS#11 engines are buggy and a nightmare to both debug and use. So this solution becomes less good when crypto hardware is available, and bad when crypto hardware must be used; there is a large community against the use of pure software for the security core, on the grounds that software cannot really be protected. For instance, this argument constrains FIPS 140-2 certified software to level 1 (out of 4) certification. The hardware/PKCS#11 way (here it is simpler; the only hardware-independent generic API is PKCS#11): The idea is to use the PKCS#11 application programming interface directly. This raises two real-world issues: first, all PKCS#11 providers (the piece of software between the application and the hardware, providing the PKCS#11 API on the application side) implement only a part of the whole PKCS#11 specification, so it is easy for an application to require something which is not supported by a particular Hardware Security Module. Second, when you have no HSM, you need a software one, but software HSMs were written to help debug PKCS#11 applications, not for security by themselves; it is not the best idea to add a layer of software in the security (and sometimes performance-critical) path. What to do now? The software solution is a dead end unless one is sure no HSM will ever be used. There are at least 3 reasons to get crypto hardware someday: first, hardware is considered to be intrinsically more secure, so it will be required in some environments; second, there are situations where hardware is better, for instance random number generators (by definition a software random generator is a pseudo-random one) and key store protection; and finally, in some usages (and without a security risk analysis to support it) one believes an HSM is an essential part of the security.\nSo the right thing is to begin with PKCS#11. 
This adds some constraints (unique initialisation, sessions, separated sign/verify contexts, etc.) but IMHO most of these constraints could lead to better code. For instance, I believe a unified sign/verify context (vs. different context types for sign or verify) is a bad design; it ignores the difference between a public and a private key. The next step is to make the interface more generic so one can plug in any crypto provider, whether PKCS#11 or a software library (any library, as long as a good one is supported). A way to do this is to squeeze the PKCS#11 handling into the SoftHSMv2 implementation, so you end up with code that works with PKCS#11 or with the Botan and OpenSSL backends of SoftHSMv2. Another benefit is that the code can be improved to accept a FIPS 140-2 certified crypto module following the required guidelines, so it can claim to use an embedded FIPS 140-2-validated cryptographic module running per FIPS 140-2 Implementation Guidance section X.Y guidelines.\n","permalink":"/blogs/the-crypto-library-disaster/","tags":["RSA","crypto","OpenSSL","PKCS","security"],"title":"The Crypto Library Disaster"},{"categories":["DHCP","ISC DHCP"],"contents":"We have now released ISC DHCP 4.3.0a1. The theme for this release is \u0026ldquo;v6 uplift.\u0026rdquo; We are upgrading some of the features you\u0026rsquo;ve used for DHCPv4 to make them available for DHCPv6. In general we\u0026rsquo;ve tried to use the same or similar commands so that moving from a DHCPv4 feature to DHCPv6 is as painless as possible.\nOver time we plan to add several articles to our Knowledgebase to describe some of the new features and how you might configure them. You can also review them by watching a webinar we presented on the new features: ISC YouTube - ISC DHCP 4.3.\nAs always, we\u0026rsquo;d appreciate anyone who can test out the new (and old) features and report any problems back to us.
Please remember this is an alpha release - while we have done our own testing, the software is not suitable for production purposes quite yet. If all goes well we should be releasing a beta version in early January.\nThe tarball and release notes for this release can be found on our FTP site: ftp://ftp.isc.org/isc/dhcp/.\nISC\u0026rsquo;s Release Signing Key can be obtained at https://www.isc.org/pgpkey/.\nShawn Routhier\n","permalink":"/blogs/isc-dhcp-4-3-0a1-now-available/","tags":["DHCP","ISC DHCP"],"title":"ISC DHCP 4.3.0a1 Now Available"},{"categories":["DHCP","ISC DHCP"],"contents":"On November 29th ISC’s DHCP4 will turn 18.\nThe first code for the DHCP project was committed to the source repository by Ted Lemon on 1995-11-29. Over the years, our contributors have committed over 9,500 changes, covering everything from small bug fixes to major enhancements. Today, our DHCP code supports clients, relays, and servers for both DHCPv4 and DHCPv6.\nLooking forward, our next feature release, DHCP 4.3.0, will be coming out soon. Our focus is DHCPv6 uplift: we are enhancing the DHCPv6 code by adding popular DHCPv4 features not currently available in DHCPv6.\nISC is also moving forward with a new generation of DHCP code. Codenamed Kea, this project is a complete re-write of the DHCP code base to bring it more in line with current hardware and software capabilities.\nOn a personal note, as the current maintainer of ISC’s DHCP4 code base, I’d like to extend a big thank you to all of the previous maintainers and committers that have helped make the code what it is. The following people are listed in the repository log as having made contributions to ISC DHCP over the past eighteen years:\nMark Andrews, James Brister, Ben Cottrell, Francis Dupont, Michael Graff, David Hankins, Evan Hunt, Shane Kerr, Ted Lemon, Stephen Morris, Tomasz Mrugalski, Murray, Damien Neil, Jeremy Reed, and Paul Selkirk.\nI’d also like to thank all the people that have and continue to use DHCP. 
It is wonderful to know that code I work on helps people connect to the Internet every day.\nRegards,\nShawn Routhier\n","permalink":"/blogs/happy-birthday-to-iscs-dhcp4/","tags":["DHCP","ISC DHCP","DHCPv6","IPv6"],"title":"Happy Birthday to ISC’s DHCP4"},{"categories":["BIND","DNS","Security"],"contents":"Introduction BIND 9 has an image problem.\nIn fact, BIND of any version has had an image problem since at least the mid-1990\u0026rsquo;s, when the Internet stopped being a military/educational playground and turned commercial. That\u0026rsquo;s when security became an issue in a lot of places where people didn\u0026rsquo;t have to worry about it in the past. That\u0026rsquo;s when BIND started getting attacked.\nBIND 9 was in a lot of ways a direct response to security problems of BIND 4 and BIND 8. And in fact it has done a pretty good job of providing a full-featured and secure DNS server for more than a decade.\nStill, system administrators have long memories. I still know people who type \u0026ldquo;sync;sync\u0026rdquo; because at one point in their career someone told them that it was useful (it may have even been useful 10 years ago). In the \u0026ldquo;fool me once, shame on you, fool me\u0026hellip; you can\u0026rsquo;t get fooled again\u0026rdquo; world of system administration, it is hard for even a product with a very good security record to change perception.\nI do not expect to do that with a single article, but I would like to present a bit about how we handle security at ISC, and try to shed some light on the actual story.\nSoftware and Security Sadly, all non-trivial software has bugs. 
And all defects are potential security issues, since a bug is the software acting in a way that the developer did not expect.\nThere are several approaches to dealing with this reality:\nReducing the number of defects in the software Improving the way software reacts to defects Using a well-defined and managed process to handle vulnerabilities ISC uses all three approaches with BIND 9.\nAs far as reducing defect count, we\u0026rsquo;ve always had a reasonable software review methodology. In the past few years we\u0026rsquo;ve adopted unit testing methodology and have recently started using commercial tools to verify protocol compliance and test systems under heavy load. We now have a QA team and are expanding its role.\nThe manner in which BIND 9 reacts to software bugs is to terminate. While unpleasant for administrators, the idea is to avoid the system running in an invalid state and causing more damage. In BIND 10, we\u0026rsquo;ve adopted practices and an architecture to minimize this as well.\nI think that the process that ISC uses to handle reported and self-discovered vulnerabilities is built on a solid foundation. Let\u0026rsquo;s delve into it.\nISC\u0026rsquo;s Security Process ISC has a published security process, and we take our position as the vendor supporting the most widely-used DNS server very seriously. All security problems are disclosed publicly, along with a detailed description of the problem and a fix, in a timely fashion. Our software engineers, support engineers, and management are experienced and comfortable with this process, and work together to deal with each report as they arrive.\nOur process has been significantly refined (one could say that it is now more \u0026ldquo;formal\u0026rdquo;) over the past several years. We want to ensure that everyone knows how to evaluate problems, and that we know what to do based on the results of those evaluations. 
The time to invent - or change - a security process is not during a security incident, especially when feelings are running high. That has happened in the past; mistakes were made, and we make every effort to avoid similar situations now.\nHaving said that, evaluating a reported security problem will always be more of an art than a science. Some cases are clear - things like crash bugs - but others are more complex - like problems that require special zone files to be served authoritatively, for example. This is where ISC\u0026rsquo;s goal of improving the Internet helps guide our evaluation process.\nReview of BIND 9\u0026rsquo;s Recent Security Record Let\u0026rsquo;s look at BIND 9\u0026rsquo;s recent security history, and perhaps examine similar data for some other open source implementations.\nThe best place to start is the BIND 9 Security Vulnerability Matrix, also available on the ISC Knowledgebase.\nWow. That\u0026rsquo;s a lot to take in!\nI\u0026rsquo;ve taken the liberty of following the various links and extracting a few things that I think are relevant to the discussion of BIND 9\u0026rsquo;s security record.
BIND Vulnerability ID | Date When Published | Version Introduced | Date When Introduced | Time Until Discovery
55 | 2013, Jul 26 | 9.7 | 2010, Feb 16 | 3.5 Years
54 | 2013, Jun 04 | 9.9.3 | 2013, May 28 | 6 Days
53 | 2013, Mar 24 | 9.7 | 2010, Feb 16 | 3 Years
52 | 2013, Jan 24 | 9.8 | 2011, Mar 01 | 22 Months
\u0026ldquo;When Published\u0026rdquo; is usually pretty close to the time incidents were reported to ISC. Based on our security policy, if a bug is \u0026ldquo;in the wild\u0026rdquo; then we publish a fix ASAP - usually within a day or two.
Otherwise, we have a phased disclosure process where we have a little more time to be sure that the fix is both correct and complete, and where we notify high-risk targets and other trusted organizations a few days in advance so that they are not vulnerable when the problem is made public.\nIn 2013, BIND 9 has had 4 vulnerabilities reported so far. Some observations:\nOne problem, number 52, is actually unlikely to have impacted any real-world users, but since it was possible to configure our software in a way that was vulnerable, we treated it with the same severity as other problems. Our policy demands that we do so, and we think it is better for operators, even if it may increase our vulnerability count. We introduced one security problem that we know of in 2013, which was caught within a week. The other problems were all introduced several years ago. Okay, let\u0026rsquo;s look back at last year, 2012:
BIND Vulnerability ID | Date When Published | Version Introduced | Date When Introduced | Time Until Discovery
51 | 2012-12-04 | 9.8 | 2011-03-01 | 21 months
50 | 2012-10-09 | 9.2 | 2001-11-25 | 11 years
49 | 2012-09-12 | 9.0 | 2000-09-16 | 12 years
48 | 2012-07-24 | 9.9 | 2012-03-13 | 5 months
47 | 2012-07-24 | 9.4 | 2007-02-23 | 5 years
46 | 2012-06-04 | 9.0 | 2000-09-16 | 12 years
In 2012 BIND 9 had 6 vulnerabilities, although we actually only released 5 patches, since 47 and 48 were handled with the same release.\nSome observations:\nOne problem, 48, is actually not a crash bug, information leak, or the like. However, based on our security assessment, it reached the level where our policy requires us to publish it as a vulnerability. Three of the problems were introduced in the software more than 10 years ago. We introduced 1 security problem that we know of in 2012, caught after 5 months. What does it mean? We have introduced security problems into BIND 9 in the past couple of years. We need to get better.
On the other hand, most of the defects were introduced long ago - often discovered by researchers or professionals actively probing for problems. The fact that they were undiscovered for so long indicates that they were fairly obscure, and the fact that they were eventually discovered hints at the popularity of BIND 9 and its corresponding value as a target.\nReview of Operating System Bugs It is difficult to defend ISC\u0026rsquo;s security record without comparing it to other software, but at ISC we consider it bad form to publish direct comparisons of our software with other products. However, I will note that all open source software has some number of reported security vulnerabilities - even djbdns, whose main claim to fame is being secure.\nRather than delving into other DNS software, let\u0026rsquo;s look at something different - something that everyone reading this will be familiar with: operating systems.\nTo make our comparisons, I\u0026rsquo;ve drawn from CVE Details, a great online database of reported vulnerabilities.\nUsing these data, we can make a quick table of vulnerabilities and compare some popular (and not-so-popular) operating systems:\nThese data can be interpreted in several ways, depending in large part on both your emotional attachment to any particular product and your feelings about security: how important is functionality versus security, how much risk are you willing to take for performance, and so on.\nFor example, one could point out that OS X has significantly fewer vulnerabilities than Windows 7. On the other hand, one could argue that OS X is based on FreeBSD, and has several times as many bugs as that operating system.\nI think an important takeaway is that systems with many users tend to have more vulnerabilities discovered. This is probably due to two factors:\nMore people are looking for vulnerabilities in systems with lots of users, as these are more interesting targets.
Users like feature-rich and complicated systems, which are likely to have bugs. And any bug is a potential security vulnerability, right? Claiming that one system is secure and another one is insecure is only meaningful for your specific situation. Like benchmarking or interface design, security is both complicated and personal.\nConclusions We cannot say that BIND 9 has a perfect security record, but BIND 9 is a good piece of software that can be run safely.\nISC takes the security of all of our software very seriously. We feel a deep sense of responsibility to all of our users, and to the Internet community at large. We will continue our efforts, and provide an even more secure BIND 9 in the future!\nShane Kerr\n","permalink":"/blogs/bind9-security-record/","tags":["BIND","DNS","CVE","vulnerability","security"],"title":"BIND 9’s Security Record"},{"categories":["ISC","Company News"],"contents":"THIS BLOG IS HISTORICAL. ISC is no longer doing business as DNSco.\nInternet Systems Consortium’s (ISC) Board of Directors named Jeff Osborn as Executive Director with the explicit goal of ensuring production-quality, fully featured nameserver software is available as open source, with no financial barriers to its use. Osborn will also serve as President of DNSco, ISC’s commercial subsidiary.\n“I am both thrilled and humbled to be trusted with the task of leading ISC,” said Osborn. “I am proud to contribute to ISC’s storied history of commitment to the global Internet community, and I look forward to working closely with our many and varied stakeholders. DNSco also has a key role to play as the primary funding vehicle for our public benefit work, and has an opportunity to address the growing threats to the DNS from a business support perspective.”\nISC and DNSco continue to develop, maintain, and distribute BIND and ISC DHCP as open source software. 
ISC makes essential infrastructure software available; DNSco offers professional support subscriptions for ground-level business continuity planning. DNSco is a commercial business tasked with generating revenue to sustain both itself and its parent company.\nFind Jeff Osborn on LinkedIn at https://www.linkedin.com/in/jeff-osborn-880651/.\n","permalink":"/blogs/jeff-osborn-named-president/","tags":["ISC"],"title":"Internet Systems Consortium Renews Commitment to BIND as Open Source – Jeff Osborn Named President"},{"categories":["ISC"],"contents":"Happy 30th birthday to the GNU project!\nAccording to their announcement commemorating the event on the GNU.org site, September 27th, 1983 was the day that Richard Stallman first announced the GNU project to the public.\nToday the open source software movement that GNU pioneered is vibrant, thriving, and global. It counts among its members a multitude of individuals and project groups motivated by an amazing variety of values and goals. Millions every day use open source software directly and innovation and competition from open source have spurred improvement even in projects that don\u0026rsquo;t share the open source philosophy.\nAt ISC we\u0026rsquo;re proud to be part of the open source movement that the GNU Project trailblazed. We\u0026rsquo;d like to salute our colleagues (and respected elder siblings) at GNU and congratulate them on their past 30 years of remarkable achievement. Well done, GNU, and thank you for all that you have done to make the world a better place.\n","permalink":"/blogs/happy-30th-birthday-gnu/","tags":["GNU","open source"],"title":"Happy 30th Birthday, GNU!"},{"categories":["BIND","DNS"],"contents":"ISC is excited to announce the release of BIND 9.9.4, featuring Response Rate Limiting (RRL), security patches, and bug fixes for DNSSEC, RPZ, and configuration modules. 
The latest dot release ensures the stability, robustness, and security of your critical Internet infrastructure.\nResponse Rate Limiting (RRL)\nA DNS DDoS attack works by forging queries that look like they came from the victim’s server, making it appear to be requesting a high volume of information. RRL enables server administrators to limit the rate at which their server will send replies to forged queries, thereby preventing it from contributing to the attack.\n“Our users have been asking for RRL to be incorporated into BIND,” said Kannan Ayyar, President of Internet Systems Consortium, “and we recognize the important role it plays in DDoS mitigation. With DDoS attacks increasing in both number and severity, we felt it was important to integrate RRL into a supported release.”\n“We have been testing RRL in limited release, and are now confident that it is ready for general use in BIND installations,” said Scott Mann, ISC’s VP of Engineering. “Third-party additions like RRL are possible because BIND is open source software. Now that it is fully implemented, we look forward to enhancing and building on RRL in future releases.”\nFor more information on RRL, visit the following links:\nDDoS Defense Module for BIND DNS - RRL (Webinar) A Quick Introduction to Response Rate Limiting Cache poisoning gets a second wind from RRL? Probably not. 
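The reflection arithmetic behind these attacks is worth making concrete: the attacker's leverage is simply the ratio of response size to query size. A minimal sketch (the packet sizes below are illustrative assumptions, not measurements):

```python
def amplification_factor(query_bytes, response_bytes):
    """Bytes the reflecting DNS server sends toward the victim
    per byte the attacker spends on the forged query."""
    return response_bytes / query_bytes

# Illustrative: a ~64-byte UDP query can elicit a multi-kilobyte
# answer for a large record set, so the attacker's traffic is
# multiplied many times over before it reaches the victim.
factor = amplification_factor(64, 3200)
print(factor)  # 50.0
```

RRL attacks this ratio directly: by suppressing or truncating repeated responses to one target, the reflector stops delivering the large answers that make the multiplication worthwhile.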
Commercial support for BIND and additional RRL functionality, RRL Classifier, is available to The DNS Company subscription customers; visit The DNS Company’s BIND Solutions to learn more.\nFor questions, suggestions and discussions relevant to BIND, participate in our community mailing list, available here.\nBIND 9.9.4 is available for download at our Downloads page.\n","permalink":"/blogs/bind-9-9-4-released/","tags":["BIND","DDoS","RRL"],"title":"BIND 9.9.4 Released"},{"categories":["BIND","DNS","Security"],"contents":"You may have heard recently that Response Rate Limiting (RRL) has re-opened the door on cache poisoning attacks.\nISC acknowledges that RRL can increase the effectiveness of cache poisoning attacks and appreciates the detailed research that uncovered it. This is, however, only one piece in the larger context of competing security concerns, and each operator will need to find their own balance of protection.\nFor those unfamiliar with it, RRL is designed to reduce the effectiveness of reflected denial of service (DoS) attacks which leverage DNS servers to amplify the attack. DNS servers are frequently used as amplifying reflectors in DoS attacks because attackers can send a small UDP query with a forged source address to the DNS server and get it to respond with a much larger answer to the target of the DoS. RRL reduces the effectiveness of these attacks by detecting when a large amount of similar traffic is being sent to a single target and suppressing responses.\nThis could, of course, potentially be used to create a different kind of DoS attack against a target if an attacker chooses to ask the same kinds of questions that the target is likely to ask. If this were done against all of the servers authoritative for a zone, then an attacker could potentially prevent the target from getting any answers at all for the zone.\nIn order to combat this risk, RRL was designed with a concept called \u0026ldquo;slip\u0026rdquo;. 
Slip comes into play after RRL starts suppressing responses, and works by allowing a specified fraction of responses to \u0026ldquo;slip\u0026rdquo; through the suppression. These responses contain none of the actual answer data, but do have the truncation (TC) bit set in the header of the response in order to tell the client to retry over TCP. This enables legitimate clients to get answers via TCP, which has a lot more overhead than UDP but is not vulnerable to source address forgery.\nISC\u0026rsquo;s RRL implementation will debut in our upcoming 9.9.4 release. Like the redbarn.org patches that preceded our implementation, we have chosen a default slip value of \u0026ldquo;2\u0026rdquo;, meaning that TC answers will be sent to the client/target one time in two. The other half of the time the queries will go unanswered.\nIt is these unanswered queries that create the increased opportunity for cache poisoning by giving an attacker a larger time window in which to get a forged reply with poison data to the victim. The data that we\u0026rsquo;ve seen indicate that\u0026ndash;for a reasonably-configured resolver\u0026ndash;it takes, on average, more than sixteen hours of 100Mbps of forged answers in order to get the resolver to accept a poisoned answer. During this time, the resolver is being flooded with traffic, which is usually a very visible event. We have not seen any analysis of the expected time for a \u0026ldquo;stealthy\u0026rdquo; cache poisoning attack, but we expect it to be significantly longer.\nThe researchers who discovered this have recommended a slip value of \u0026ldquo;1\u0026rdquo;, sending a TC answer for every response that RRL decides needs to be suppressed. 
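The slip mechanism is simple enough to sketch. The toy rate limiter below is an illustration only, not BIND's actual implementation (the class, its fixed per-client window, and the string return values are all invented for the example): it answers normally up to a limit, then lets every slip-th suppressed response through as a truncated reply.

```python
from collections import defaultdict

class RateLimiter:
    """Toy RRL-style responder: rate = full answers allowed per bucket;
    slip = every slip-th suppressed response goes out truncated (TC=1)."""
    def __init__(self, rate, slip):
        self.rate, self.slip = rate, slip
        self.counts = defaultdict(int)      # answers sent per client bucket
        self.suppressed = defaultdict(int)  # suppressed responses per bucket

    def respond(self, client):
        self.counts[client] += 1
        if self.counts[client] <= self.rate:
            return "ANSWER"                 # under the limit: full answer
        self.suppressed[client] += 1
        if self.slip and self.suppressed[client] % self.slip == 0:
            return "TRUNCATED"              # TC=1: legitimate clients retry over TCP
        return "DROP"                       # no response at all

rl = RateLimiter(rate=5, slip=2)
results = [rl.respond("10.0.0.1") for _ in range(9)]
# First 5 answered; of the 4 suppressed, every 2nd slips through as TC=1.
```

With slip=2, half of the suppressed queries get a TC=1 reply and half get nothing; with slip=1, every suppressed query still receives a truncated reply.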
This reduces the effectiveness of cache poisoning attacks while increasing the effectiveness of using DNS to amplify DoS attacks.\nNote that this analysis only applies to queries and responses that are affected by RRL, while anything that causes the legitimate packets to be dropped, even simple traffic congestion, will benefit someone attempting cache poisoning. Therefore, modifications to the behavior of RRL can have, at best, a limited effect in defending against cache poisoning. The best defense is for authoritative server operators to sign their zones with DNSSEC, and for resolver operators to validate responses.\nThe bottom line is that there is no clear \u0026ldquo;right\u0026rdquo; answer here. Both concerns are valid and the mitigation for one increases the risk of the other.\nWe believe, based on what we know of the current state of the Internet, that a slip value of \u0026ldquo;2\u0026rdquo; is closer to the theoretical \u0026ldquo;sweet spot\u0026rdquo; in addressing both risks than a slip value of \u0026ldquo;1\u0026rdquo; is, which is why we are keeping \u0026ldquo;2\u0026rdquo; as our default. Since RRL is not enabled by default even when it is compiled into BIND, and the slip value is a configurable option, we believe that this provides the most useful default value while giving individual operators the freedom to choose the risk balance that they are comfortable with.\nFinding the right risk balance also includes considering the effect that other features (e.g. 
DNSSEC) have on amplification potential and resistance to cache poisoning.\nThose who are interested in learning more about how we got into this situation and where we ought to go from here may want to check out Paul Vixie\u0026rsquo;s blog post \u0026ldquo;On the Time Value of Security Features in DNS\u0026rdquo;.\nEdit (1 October, 2013): This article previously stated, incorrectly, that during a cache poisoning attack the server would be unable to resolve names for the domain under attack.\n","permalink":"/blogs/cache-poisoning-gets-a-second-wind-from-rrl-probably-not/","tags":["BIND","DNS","CVE","DDoS","RRL","security"],"title":"Cache Poisoning Gets a Second Wind From RRL? Probably Not."},{"categories":["BIND","DNS","Security"],"contents":"Internet Systems Consortium (ISC) announces that the RRL module, currently the most effective defense against the use of DNS in Distributed Denial of Service attacks, is now part of the upcoming BIND release.\nA DNS DDoS attack works by forging queries that look like they came from the victim’s server, making it appear to be requesting a high volume of information. RRL enables server administrators to limit the rate at which their server will send replies to forged queries, thereby preventing it from contributing to the attack. The frequency of DNS DDoS attacks has been increasing, rising by 20% in Q2 of 2013. In an average attack 50 million packets per second are beamed at the victim. As attacks increase, RRL is the best defense available.\n“Our users have been asking for RRL to be incorporated into BIND,” said Kannan Ayyar, President of Internet Systems Consortium, “and we recognize the important role it plays in DDoS mitigation. 
With DDoS attacks increasing in both number and severity, we felt it was important to integrate RRL into a supported release.\u0026quot;\n“We have been testing RRL in limited release, and are now confident that it is ready for general use in BIND installations,” said Scott Mann, ISC’s VP of Engineering. “Third-party additions like RRL are possible because BIND is open source software. Now that it is fully implemented, we look forward to enhancing and building on RRL in future releases.”\nFor more information on RRL, visit the ISC Knowledgebase at https://kb.isc.org/docs/aa-01000.\n","permalink":"/blogs/isc-adds-ddos-defense-module-to-bind-software/","tags":["BIND","DNS","DDoS","RRL"],"title":"ISC Adds DDoS Defense Module to BIND Software"},{"categories":["ISC","Company News"],"contents":"THIS IS AN HISTORICAL BLOG POST. ISC is no longer doing business as DNSco.\nInternet Systems Consortium (ISC) announces that it has sold its security-related assets to Farsight Security, Inc. (“Farsight”), a new company started by ISC founder, Paul Vixie. The DNSDB and SIE services developed by ISC over the past five years will now be provided by Farsight.\n“Paul Vixie has been the driving force in Internet security innovation at ISC for many years,” said Kannan Ayyar, President of Internet Systems Consortium. “We are pleased that he will be taking these technologies forward and providing ongoing leadership in the security space. We look forward to his continued innovation there. Paul Vixie has been part of ISC for 18 years; with this new venture he has ended all of his involvement with ISC. We are grateful for his many contributions and share his excitement about Farsight.”\nThis is the second time this year that ISC has divested certain assets to a commercial for-profit entity. 
ISC is doing this to increase clarity and focus for its non-profit Internet mission, which includes running F-Root, providing hosting and Internet services for non-profits, researching and developing new ideas for the Internet, and of course providing free world-class nameserver and DHCP software for the benefit of the Internet.\nIn April of this year, ISC launched DNSco (http://dns-co.com), a wholly-owned subsidiary delivering a full suite of commercial services for BIND and ISC DHCP. Farsight, unlike DNSco, is a privately held company that is independent of ISC.\n“Farsight security technologies and service delivery capabilities are the result of years of research at ISC,” said Paul Vixie. “During that time we iterated on a range of business models and technology prototypes yielding best-in-class Internet security solutions that will help make the Internet a safer place.”\nAbout ISC\nInternet Systems Consortium is a non-profit 501(c)(3) public benefit corporation widely known for world-class Internet software engineering and network operations. ISC produces only open-source software with emphasis on Internet core technology, of which BIND and ISC DHCP are the two best-known examples. ISC\u0026rsquo;s Managed Open Source process ensures the quality of this software while keeping it open and available.\nISC operates high-reliability global networks of DNS root servers (F-root) and authoritative DNS servers (SNS@ISC) both for non-profit and for commercial enterprises. ISC is also very involved in ongoing Internet protocol and standards development, particularly in the areas of DNSSEC and IPv6. ISC is supported by donations from generous sponsors, by program membership fees, and by increasing revenues from for-profit subsidiaries. 
For program or donation information, please visit our website at https://isc.org.\nAbout Farsight Security, Inc.\nFarsight is a privately held Delaware corporation exclusively focused on the development of leading edge security solutions for ISPs, network and system security solution providers, governments, and medium to large commercial companies. Leveraging its superior telemetry data collection and processing capabilities, Farsight provides its clients with cloud-based, real-time network observability and reporting solutions.\nLike ISC before it, Farsight is committed to sharing its security-related telemetry data with security industry partners and academic researchers at nominal, non-discriminatory subscription rates. In support of its mission as a clearing house for such data, Farsight invites network operators and commercial clients to provide additional telemetry data, which will increase the volume, quality and accuracy of its data provision services thus improving the overall safety of the Internet as a viable commercial marketplace.\nFurther information about it can be found at https://www.farsightsecurity.com.\n","permalink":"/blogs/isc-spins-off-its-security-business-unit/","tags":["ISC","Farsight","Paul Vixie"],"title":"ISC Spins Off Its Security Business Unit"},{"categories":["ISC","Company News"],"contents":"ISC congratulates its board members Stephen Wolff and David Farber on their induction into the Internet Hall of Fame. Thank you for all you’ve done for ISC and the Internet!\nInternet Hall of Fame\n","permalink":"/blogs/isc-board-members-inducted-into-the-internet-hall-of-fame/","tags":["board","Internet Hall of Fame"],"title":"ISC Board Members Inducted Into the Internet Hall of Fame"},{"categories":["BIND","DNS","Security"],"contents":"Last night, TechCrunch reported that LinkedIn and Fidelity.com faced an outage due to a DNS error. 
ISC staff and colleagues observed that the error was caused by a change of nameserver information at the registry, causing DNS queries to be directed to nameservers that did not correctly answer those queries. Suzanne Woolf, ISC\u0026rsquo;s Director of Strategic Partnerships, points out \u0026ldquo;the problem was apparently aggravated by a long TTL on the incorrect nameserver records, causing the bad information to persist in resolver caches and the misdirection of queries to continue far longer than such information is usually held\u0026rdquo;. Eric Ziegast, a member of ISC\u0026rsquo;s security products group, also observed via DNSDB that over 40 other domains were affected as well.\nIt doesn\u0026rsquo;t appear, based on current observations, that this incident was due to malicious activity. However, ISC staff have identified multiple domains hosted by the registrar that are still having DNS queries for them directed to the wrong nameservers, as caches in recursive DNS resolvers all over the Internet have continued to hold incorrect records for them. This data, distributed in error, could persist in recursive servers, by some reports, for up to two days from the original incident before it times out, meaning that end-users who rely on those servers might continue to be unable to reach the affected domains.\nISC Support staff have identified some steps that operators of recursive servers based on BIND can take to mitigate this issue by removing the bad data from their caches.
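The persistence described above is just TTL-driven caching at work. A minimal sketch (hypothetical names, times in seconds) shows why a record cached with a two-day TTL keeps being served long after the authoritative data was corrected:

```python
class TtlCache:
    """Minimal resolver-cache sketch: a record is served until its
    TTL expires, regardless of what the authority now publishes."""
    def __init__(self):
        self.store = {}  # name -> (value, expiry_time)

    def put(self, name, value, ttl, now):
        self.store[name] = (value, now + ttl)

    def get(self, name, now):
        value, expires = self.store.get(name, (None, 0))
        return value if now < expires else None

cache = TtlCache()
# Wrong NS data cached with a two-day TTL (172800 s) at time 0.
cache.put("example.com/NS", "wrong-ns.example", ttl=172800, now=0)
stale = cache.get("example.com/NS", now=86400)   # a day later: still the bad data
gone = cache.get("example.com/NS", now=200000)   # after two days: finally expired
```

Until the entry expires (or is explicitly flushed by the operator), every client of that resolver keeps getting the stale answer.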
The article is publicly available in our Knowledgebase, titled \u0026lsquo;How do I flush or delete incorrect records from my recursive server cache?\u0026rsquo;\nIt\u0026rsquo;s also worth pointing out that this case is not exactly the same as \u0026lsquo;cache poisoning,\u0026rsquo; as the cached data was not introduced by a third party but was published, at one time, as authoritative data.\n","permalink":"/blogs/hijacking-dns-error-ddos-what-happened-and-what-you-can-do/","tags":["DDoS","security"],"title":"Hijacking DNS? Error? DDoS? What Happened, and What You Can Do"},{"categories":["BIND","DNS","Security"],"contents":"Unexpected DNSSEC validation failures ISC was recently involved in the troubleshooting and diagnosis of a DNSSEC-validation interoperability issue between BIND 9 and PowerDNS, where BIND is acting as a recursive server and PowerDNS is authoritative. The end result was that BIND marked the PowerDNS server as not supporting EDNS0. Since DNSSEC requires EDNS0 support, the PowerDNS server was no longer considered capable of answering DNSSEC questions, and therefore, the BIND server was not able to resolve the DNSSEC-signed domains served by the PowerDNS authoritative servers.\nEDNS0 is a DNS extension that has been part of the protocol since 1999, and was originally described in RFC 2671 (since obsoleted by RFC 6891). BIND 9, like most other recursive resolvers, supports this extension, and implements logic to work around DNS servers and network devices that do not understand EDNS0, or that do not behave properly if they do understand EDNS0. BIND must balance competing objectives when processing query responses from an authoritative server. If BIND sends an EDNS0-enabled query to a server and does not get a well-formed answer, it will try sending the query again, only this time without EDNS0.
If an answer arrives, then the server is considered to work - but not with EDNS0.\nThe Issue in Detail In the specific case described here, sometimes the PowerDNS authoritative server was returning an answer, but the response packet was malformed. Here is how the DNS packet flow transpired:\nAt this point, the BIND server has received a well-formed answer from the PowerDNS server - with EDNS0 disabled - so it marks the server as being able to answer, but this time without EDNS0.\nNow, the main impact of marking a server as not being able to support EDNS0 is that DNSSEC requires EDNS0. This means that if all authoritative name servers for a domain are running PowerDNS and have the same error, then a DNSSEC-validating BIND 9 recursive server will not be able to resolve that DNSSEC-signed domain.\nFurther, this means that DNSSEC validation will no longer be possible for any of those servers\u0026rsquo; DNSSEC-signed domains; if they are hosting thousands of domains secured by DNSSEC, all of them will become unresolvable.\nMitigation From an administrative standpoint (i.e. what should the hapless DNS admin do about this), the solution is to either patch PowerDNS authoritative servers per the link above, or upgrade PowerDNS to version 3.3 RC. BIND administrators do not need to take any action unless they are running a validating recursive server that is already returning SERVFAILs due to marking the authoritative servers for the domains as EDNS0-incapable. In that situation, flushing the cache or restarting the server will restore normal service. It\u0026rsquo;s also particularly important in this case to be running a current version of BIND, as there have been bug fixes to cache management that are applicable to this situation - one in particular that ensures that an authoritative server\u0026rsquo;s EDNS0 capability (and other information) is refreshed after holding it for 30 minutes. 
Prior to this fix, authoritative server details were never discarded whilst the server was still being queried regularly.\nYet, while the actual fix for this situation is to upgrade the affected PowerDNS authoritative servers, it is theoretically possible for BIND to handle such partially-working servers better. One fix would be to always use EDNS0 when trying to get DNSSEC information from an authoritative server, rather than checking the server\u0026rsquo;s current status. This is what other resolvers (e.g. Unbound) do. The theory here is that if it works, then the user gets an answer; and if it does not work, it is no worse than not sending the query at all. Unfortunately, BIND 9\u0026rsquo;s current design does not make such information available when queries are sent. Changing the design to get around unusual situations like this would require a significant amount of engineering work, along with comprehensive functional and regression testing.\nBIND could also try things such as requiring multiple failures, or shortening the time period before rechecking a server for EDNS0 support. These minor tweaks might help, and would not require major code changes. But they may also hurt in certain other cases, and again, would require significant retest effort. Ultimately, the BIND team decided not to make any code changes for this particular condition, since a fix exists for the PowerDNS server bug that is part of the problem, and this entirely mitigates the issue from a functional standpoint.\nLooking Forward Significant changes are pending in the EDNS0 handling code for BIND 9.10 which will make BIND behave better in its handling of EDNS0 in general. This should certainly improve the situation, but highlights one of the issues with changing protocols, and changing existing implementations. 
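The EDNS0 fallback behaviour discussed in this post can be condensed into a few lines. The following is a simplified sketch of the logic only, not BIND\u0026rsquo;s actual implementation; `send_query` and its string results are hypothetical stand-ins for real network exchanges:

```python
def probe_edns0(send_query):
    """Sketch of a resolver's EDNS0 fallback: try EDNS0 first, then plain DNS.

    `send_query(edns0=...)` stands in for a real query/response exchange and
    returns "ok", "malformed", or None (timeout). Returns the mode the server
    ends up marked with, or None if it never answered well-formed at all.
    """
    if send_query(edns0=True) == "ok":
        return "edns0"
    # No well-formed EDNS0 answer: retry the query without EDNS0.
    if send_query(edns0=False) == "ok":
        return "plain"  # server answers, but is marked EDNS0-incapable
    return None

def can_do_dnssec(mode):
    # DNSSEC requires EDNS0, so a server marked "plain" cannot serve
    # DNSSEC-signed zones to a validating resolver.
    return mode == "edns0"

# A server that mangles EDNS0 responses but answers plain queries ends up
# marked EDNS0-incapable -- and its DNSSEC-signed zones become unresolvable.
broken = lambda edns0: "malformed" if edns0 else "ok"
mode = probe_edns0(broken)
print(mode, can_do_dnssec(mode))  # plain False
```

In the interop case above, the malformed EDNS0 responses push the server into the "plain" state, after which its DNSSEC-signed zones cannot be validated until the cached server state is flushed or times out.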
Postel\u0026rsquo;s Law states: \u0026ldquo;Be conservative in what you do, be liberal in what you accept from others.\u0026rdquo; BIND - and indeed all other production recursive resolvers - must be fairly liberal in handling implementation quirks.\nThe unfortunate truth is that, because of the way DNS works, it is often the resolver server\u0026rsquo;s operator who has to deal with the impact of broken authoritative servers. Even when the authoritative servers are responding correctly, the effects of an earlier problem can persist in cache and may require special administrator (or script) intervention. Until we discover reliable techniques to handle the majority of situations, we\u0026rsquo;re stuck muddling through occasional breakages. This ISC Knowledgebase article may help those who need to handle those situations.\n","permalink":"/blogs/thoughts-on-a-recent-bind-interop-issue/","tags":["BIND","DNSSEC"],"title":"Thoughts on a Recent BIND Interop Issue"},{"categories":["ISC","Company News"],"contents":"NOTE THAT THIS IS A HISTORICAL DOCUMENT. ISC is no longer doing business as DNSco.\nBeijing, China - 9 April 2013. Internet Systems Consortium (ISC) announces the creation of its first commercial subsidiary, DNSco.\nDNSco combines a business view and full-service solution with ISC\u0026rsquo;s world-renowned technical excellence. For over 15 years ISC has been the world\u0026rsquo;s leading expert in DNS and related technologies such as DHCP. ISC employees are versed in every aspect of these technologies, from protocol to implementation to operations to registries to policy. 
As the leader in these core Internet technologies and as the author and distributor of BIND and other core software, ISC has frequently been asked to provide business services to enterprises that recognize their dependence on DNS and DHCP.\n\u0026ldquo;ISC realized that we could be more responsive to our commercial customers while remaining true to our original mission of improving and supporting Internet growth and stability,\u0026rdquo; said Kannan Ayyar, President of Internet Systems Consortium. \u0026ldquo;We are separating our globally respected public-benefit activities from our growing business activities. The mission of our new DNSco subsidiary is to provide credible, compatible, economical, stable, and expert support and services to businesses for which ISC’s software and technologies are mission-critical.\u0026rdquo;\nDNSco is entirely owned by ISC, but will be managed separately. Its mission is different, its financials are separate, and it will be run as a commercial entity. But DNSco remains an ISC company, which means that expertise, staff, experience, and culture will move back and forth between the two companies. Separating commercial and public-benefit activities enables us to do a better job of both. Moreover, profits from DNSco will help fund ISC’s ongoing public benefit mission.\nDNSco\u0026rsquo;s products are subscriptions for major ISC software titles, training and consulting for the technologies involved, subscriber-only enhancements to the software, and software support related to their operation as well as advance notifications of security vulnerabilities.\nIn addition, DNSco will release various closed-source software products, tools and services, including the newly launched back-end registry service. 
If DNS technologies are important to your business, or if they are your business, you will find DNSco products and services to be critical to your success.\nVisit DNSco at http://www.dns-co.com.\nMedia Contact:\nBrian Reid\nreid@isc.org\n+1 650 423 1327\n","permalink":"/blogs/isc-creates-commercial-subsidiary-dnsco-2/","tags":null,"title":"ISC Creates Commercial Subsidiary: DNSco"},{"categories":["DNS","BIND","Security"],"contents":"It has been a very eventful week in the field of DNS operations. In addition to the BIND vulnerability disclosed by ISC this week, the DNS world has been buzzing with news about \u0026ldquo;the biggest Distributed Denial of Service attack to date,\u0026rdquo; directed against Spamhaus by parties critical of their decision to list Cyberbunker as a spam source. As an industry leader in the field of DNS software, ISC sees the Spamhaus DDOS as a perfect opportunity to remind DNS operators why it is important to not operate an \u0026ldquo;open\u0026rdquo; recursive resolver, a policy recommendation we have been making since 2005.\nA significant component of the DDOS traffic targeted at Spamhaus is coming from a technique that has been known for years \u0026ndash; a variety of reflection attack commonly known as a \u0026ldquo;DNS amplification attack.\u0026rdquo; By relying on the fact that an answer to a DNS query can be much larger than the query itself, attackers are able to both amplify the magnitude of the traffic directed against a DDOS victim and conceal the source of the attacking machines.\nTo accomplish this, the attacker sends a DNS query a few bytes in size to an open resolver, forging a \u0026ldquo;spoofed\u0026rdquo; source address for the query. The open resolver, believing the spoofed source address, sends a response which can be hundreds of bytes in size to the machine it believes originated the request. 
The end result is that the victim\u0026rsquo;s network connection is hit with several hundred bytes of information that were not requested. They will be discarded when they reach the target machine, but not before exhausting a portion of the victim\u0026rsquo;s network bandwidth. And the traffic reaching the victim comes from the open resolver, not from the machine or machines used to initiate the attack. Given a large list of open resolvers to reflect against, an attacker using a DNS amplification attack can hide the origin of their attack and magnify the amount of traffic they can direct at the victim by a factor of 40 or more.\nDNS operators who operate open resolvers without taking precautions to prevent their abuse generally believe they are harming nobody, but as the Spamhaus DDOS proves, open resolvers can be effortlessly coopted by attackers and used in criminal attacks on third parties.\nBeginning in 2005, ISC began publicly advocating for operators to stop operating open resolvers. In 2007 we changed the behavior of BIND, the world\u0026rsquo;s most popular nameserver software, so that open resolvers would no longer be the default. And in 2008 ISC CTO Joao Damas co-authored RFC 5358, \u0026ldquo;Preventing Use of Recursive Nameservers in Reflector Attacks.\u0026rdquo; For 8 years now we\u0026rsquo;ve been consistently leading on this issue as part of our mission to strengthen the DNS infrastructure, improve network security, and contribute to a stable and open internet. But there are still many open resolvers in operation. In order to avoid being pressed into service as an unwitting pawn in a criminal conspiracy, ISC strongly advises that DNS operators make use of the security features in BIND to enforce reasonable access permissions on their recursive resolvers.\nAt ISC, supporting the health of the Domain Name System and improving the security and stability of an open internet are core values and the biggest part of our public mission. 
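The arithmetic behind the amplification factor mentioned above is straightforward. A minimal sketch, with illustrative packet sizes (the 64-byte query and 3,000-byte response are hypothetical, though typical of large DNS answers):

```python
def amplification_factor(query_bytes, response_bytes):
    """Bandwidth multiplier an attacker gains per spoofed query."""
    return response_bytes / query_bytes

# A ~64-byte query eliciting a ~3000-byte response (e.g. a large
# DNSSEC-signed answer) yields roughly 46x amplification, consistent
# with the "factor of 40 or more" noted above.
query = 64
response = 3000
factor = amplification_factor(query, response)
print(round(factor, 1))  # 46.9

# Traffic arriving at the victim for a modest 10 Mb/s of spoofed queries:
attack_mbps = 10 * factor
print(round(attack_mbps))  # 469 Mb/s
```

Because the query carries a spoofed source address, all of that response traffic lands on the victim rather than on the machines originating the attack.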
If you would like to know more about us and our efforts, follow us on Twitter or LinkedIn, or get in touch via our contact page.\nNext week we\u0026rsquo;ll have more to share about how current ISC development efforts are targeting reflection attacks and other network abuses to create a better Internet for everybody.\n","permalink":"/blogs/is-your-open-dns-resolver-part-of-a-criminal-conspiracy-2/","tags":["BIND","DNS","DDOS","vulnerability","security"],"title":"Is Your Open DNS Resolver Part of a Criminal Conspiracy?"},{"categories":["ISC","Company News"],"contents":"ORLANDO, FL\u0026ndash;(Marketwire - Mar 12, 2013) - Internet Systems Consortium (ISC) is delighted to announce the launch of the Open Home Gateway Forum with an initial grant from Comcast. The OHGF is a forum of ISPs and vendors and Internet development organizations, initiated and spearheaded by ISC, that aims to improve the rollout of new Internet technologies to home networks by providing stable, quality-assured reference Open Source software to be used in home gateways. Home gateways are the means by which a residential customer connects to an Internet service provider.\nISC has nearly two decades of experience in open source software development and the processes that surround it, including not just development, but release management, testing, bug management, and support. ISC\u0026rsquo;s initial plans for home gateway software development include better support for IPv6, a smooth and secure mechanism to update the gateway\u0026rsquo;s firmware, and achieving better throughput and stability. Gateways are now first-class devices in the home, and they should have the software quality and functionality that people have come to expect from a first-class device.\n\u0026ldquo;The edge device in homes is growing in function and importance,\u0026rdquo; said Kannan Ayyar, president of Internet Systems Consortium. \u0026ldquo;It must meet the needs of the consumer as well as the service provider. 
A home gateway needs to offer security, choice, speed, and flexibility. We are confident that the new devices enabled by the work of the Open Home Gateway Forum will be of tremendous value to Internet service customers as well as to their service providers.\u0026rdquo;\n\u0026ldquo;Comcast is pleased that the Open Home Gateway Forum is one of the first programs underwritten with the new Comcast Technology Research \u0026amp; Development Fund,\u0026rdquo; said Jason Livingood, Vice President of Internet and Communications Engineering at Comcast. \u0026ldquo;We think this sort of open source development can make a meaningful difference in home networks and we\u0026rsquo;re committed to doing what we can to make the Forum a success.\u0026rdquo;\nAbout ISC\nInternet Systems Consortium (ISC) is a 501(c)3 public benefit corporation widely known for world‐class Internet software engineering and network operations. Founded in 1994 under an initial grant from UUNET, ISC is governed today by a 5-member Board of Directors. ISC software, of which BIND and ISC DHCP are the two best‐known examples, is open source. Our passion is Internet core technology. Our widely imitated Managed Open Source process ensures the quality of our software while keeping it completely open and available. ISC operates high‐reliability global networks of DNS root servers (F‐Root) and authoritative DNS servers both for non‐profit and commercial enterprises. ISC is actively involved in Internet protocol and standards development, particularly in the areas of DNSSEC and IPv6. ISC is supported by donations from generous sponsors, by program membership fees, and by increasing revenues from for-profit subsidiaries. For further information, please visit https://www.isc.org/.\nAbout Comcast Corporation\nComcast Corporation ( NASDAQ : CMCSA ) ( NASDAQ : CMCSK ) (https://www.xfinity.com/) is one of the nation\u0026rsquo;s leading providers of entertainment, information and communication products and services. 
With 23.8 million cable customers, 15.7 million high-speed Internet customers, and 7.4 million Comcast Digital Voice customers, Comcast is principally involved in the development, management and operation of cable systems and in the delivery of programming content.\nComcast\u0026rsquo;s content networks and investments include E! Entertainment Television, Style Network, Golf Channel, VERSUS, G4, PBS KIDS Sprout, TV One, ten sports networks operated by Comcast Sports Group and Comcast Interactive Media, which develops and operates Comcast\u0026rsquo;s Internet businesses, including Comcast.net.\nContact:\nMedia\nBrian Reid\nreid@isc.org\n+1 650 423 1327\n","permalink":"/blogs/isc-launches-open-home-gateway-forum/","tags":["ISC","OHGF"],"title":"ISC Launches Open Home Gateway Forum"},{"categories":["DNS","BIND","DHCP","Training"],"contents":"If you\u0026rsquo;re like the majority of our training participants, your career in DNS* started in a similar fashion. Your knowledge of the protocol, the resilience of your DNS infrastructure, whether or not you like to use valid shortcuts in config files and so on, are often very dependent upon which team you found yourself in when you first landed in the DNS world. For the more fortunate among us, the team was \u0026ldquo;RFC aware,\u0026rdquo; had a book or two on DNS and existing, working configurations from which to learn. At the other end of the spectrum, some of us have landed where the mention of DNS just elicited a finger pointing to an old dusty machine under an unoccupied desk.\nAmazingly, no matter the landing point, folks usually don\u0026rsquo;t find their way to a training until they have a few years of DNS experience. This speaks volumes about the protocol design and implementations. It also says a lot about the adaptability and tenacity of DNS admins. The critical nature of our \u0026ldquo;DNS inheritance\u0026rdquo; becomes very clear, possibly more quickly than we\u0026rsquo;d like. 
One does what needs to be done to resolve and prevent issues in a timely manner. Unfortunately, these qualities often produce a \u0026ldquo;why invest training dollars when you do such a fine job with DNS\u0026rdquo; hurdle for admins when trying to justify taking a formal course. (Please forward this blog for possible help with that\u0026hellip;)\nWhatever the length and depth of your DNS experience, an ISC training course is an excellent opportunity to expand your understanding. We discuss the original protocol design goals and how implementation has played out in the real world. Our intensive hands-on labs offer you a fresh environment where you build, configure, implement, and troubleshoot. From experienced DNS admins we often hear comments such as \u0026ldquo;so that is how that works,\u0026rdquo; \u0026ldquo;wish I had known that trick years ago,\u0026rdquo; and \u0026ldquo;this really filled in the gaps for me.\u0026rdquo; For those newer to DNS, you will leave with solid footing and plenty of tips and practice to make sure your DNS behaves as it should.\nWe offer public trainings around the world with our training partner, Men \u0026amp; Mice. For our most up-to-date schedule of public training sessions, please visit: https://www.isc.org/technicalservices.\nWe also offer private, on-site trainings. Please inquire at training@isc.org if interested.\n* Please re-read this blog substituting \u0026ldquo;DHCP\u0026rdquo; everywhere you see \u0026ldquo;DNS.\u0026rdquo; We promise it works, it\u0026rsquo;s true and it\u0026rsquo;ll be fun!\nIf you are an executive or manager whose business model relies on DNS, perhaps you\u0026rsquo;ve said similar words. Often, \u0026ldquo;DNS Just Works\u0026rdquo; and you are fortunate enough to maybe not even know who or what team handles this critical aspect of your business. If so, we invite you to consider how much you have invested in infrastructure not specifically related to DNS. 
Now, think about your return on that investment should DNS no longer function or become even occasionally unreliable. Does your recurring investment in DNS match its importance to your business? If not, ISC training is a good place to begin bridging that gap.\n","permalink":"/blogs/oh-you-know-unix-good-we-need-you-to-run-our-dns/","tags":["DNS","BIND","DHCP","training","certification"],"title":" “Oh, you know UNIX? Good, we need you to run our DNS.” "},{"categories":["DNS","BIND"],"contents":"As per ISC\u0026rsquo;s Software Support Policy, we have designated the 9.7 branch of BIND 9 as End of Life. This means there will be no further bug fixes for this branch. We urge all customers running 9.7 or earlier to upgrade to our 9.9 branch.\nBIND 9.9 includes important bug and security fixes, and ISC has made a public commitment to designate BIND 9.9 as our next Extended Support Version (ESV), ensuring that it will receive critical updates for years to come.\nThe 9.6-ESV branch will be EOL’d by June, 2013.\nISC released 9.9.0 in February of 2012, and its features were highlighted in these two blogs by Evan Hunt: ISC BIND 9.9.0a1 — Feature Preview and BIND 9.9.0b2 - A Feature Update.\nAdditionally, ISC\u0026rsquo;s CTO, Joao Damas, wrote an informative and important blog: BIND 9.9 - Why Should You Care? \u0026hellip; and we hope you do care.\nThe full list of improvements to each version can be tracked in our release notes for BIND, but for operators who are considering switching to 9.9 for the first time here are some of the improvements we think you will especially appreciate.\nIf you're using DNSSEC you'll find BIND 9.9 features significant improvements to DNSSEC usability and functionality, including inline signing, the easiest method for zone signing yet, support for a variety of other signing methods to suit your work flow, new tools to help check your zones for DNSSEC compliance, and support for new signing algorithms. 
All operators of BIND 9.9 can benefit from the performance work we've done, especially improved worker thread handling and better tuning of default parameters to achieve superior performance \"out of the box.\" BIND 9.9 supports NXDOMAIN redirection, a feature requested by many of our customers and now available for operators wishing to use this feature. BIND 9.9 represents a substantial evolution of BIND 9. ","permalink":"/blogs/out-with-bind-9-7-and-in-with-bind-9-9/","tags":["DNS","BIND","BIND 9.9"],"title":"BIND – Out With the Old, and In With the New"},{"categories":["DNS","BIND"],"contents":"BIND 9.9 is a new release of the gold standard for DNS servers on the Internet. It builds on a tried and trusted platform that has been evolving and maturing over more than 10 years and has kept adding new powerful and useful features with each new release.\nIn BIND 9.9 we have introduced several new features that can make a difference to how you operate your DNS service, no matter what size of an installation you have. Here is a brief rundown of why you should care about this new version:\nImproved performance\nBIND 9.9 improves performance in two main areas:\nRe/Start speed. If you have lots of zones, you will see speedups in start/reconfig/reload times between 3x-20x.\nBetter I/O. When using a multithreaded build of BIND9.9 on a multicore machine, the work we have done on I/O optimisation will get you better performance for DNS query handling. You get this improvement automatically if you are using threads. To find out if you are, and a few more details about what is behind this, have a look at ISC’s Knowledgebase article at Performance: Multi-threaded I/O.\nTrying to do DNSSEC but want to minimise changes?\nIf you have been thinking about deploying DNSSEC in your authoritative server but found it would change your established workflow too much, we have good news. BIND 9.9 introduces a feature called inline signing. 
This allows you to drop a BIND 9.9 nameserver at any place in your DNS publication workflow to get your zones signed without having to change things that are already in place. It works by having BIND transfer in an unsigned copy of your DNS zones, handle the signing in a single spot, and make available a signed copy of the zone out the other end. If you drop this between your zone generation process and your current nameservers, it allows for transparent signing by just inserting this one additional step, rather than modifying what you already have. You can look at a few useful examples in ISC’s Knowledgebase at Inline Signing in ISC BIND 9.9.0 \u0026ndash; Examples.\nRedirection of non-existing names\nWith the pressure on revenue that everyone is seeing these days, ISPs have been resorting to the use of redirection of DNS queries that return non-existent domains to some ISP-defined user help information. While we understand the desire for this feature, ISC also firmly believes in the need for DNSSEC as a way to provide users with protection online. In that spirit our NXDOMAIN redirection implementation allows ISPs to implement their business models but preserves the integrity of DNSSEC-enabled users by disabling NXDOMAIN redirection if the user requests DNSSEC validation for DNSSEC-signed data.\n","permalink":"/blogs/bind-9-9-why-should-you-care-2/","tags":["DNS","BIND","BIND 9.9"],"title":"BIND 9.9 – Why Should You Care?"},{"categories":["DNS","BIND"],"contents":"Today, ISC is publishing a new beta release of BIND 9.9.0. As several new features have been added since the feature preview I posted on the occasion of the first alpha release, it would seem to be a good time for an update.\nInline signing\nThe new \u0026lsquo;inline-signing\u0026rsquo; option, in combination with the \u0026lsquo;auto-dnssec\u0026rsquo; option that was introduced in BIND 9.7, allows named to sign zones completely transparently. 
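A minimal zone clause combining these two options might look like the following. This is a sketch only: the zone name, file path, and key directory are placeholders, and the exact syntax should be checked against the BIND 9.9 documentation:

```
zone "example.com" {
    type master;
    file "example.com.db";          // raw, unsigned zone file; left untouched
    key-directory "/etc/bind/keys"; // location of the zone's DNSSEC keys
    auto-dnssec maintain;           // create and refresh signatures automatically
    inline-signing yes;             // maintain and serve a separate signed copy
};
```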
Before now, automatic zone signing only worked on master zones that were configured to be dynamic; now, it works on any master or slave zone.\nIn a master zone with inline signing, the zone is loaded from disk as usual, and a second copy of the zone is created to hold the signed version. The original zone file is not touched; all comments remain intact. When you edit the zone file and reload, named detects the incremental changes that have been made to the raw version of the zone and applies those changes to the signed version, adding signatures as needed.\nA slave zone with inline signing works similarly, except that instead of loading the zone from disk and then signing it, the slave transfers the zone from a master server and then signs it. This enables \u0026ldquo;bump in the wire\u0026rdquo; signing: a dedicated signing server acting as an intermediary between a hidden master server (which provides the raw zone data) and a set of publicly accessible slave servers (which only serve the signed data).\nNote: A known bug in this release can cause master zones that use inline-signing to lose synchronization between the signed and unsigned versions. This will be addressed in a future release; in the meantime, this feature should be considered experimental. The problem has not been seen when using inline-signing with slave zones.\nOther DNSSEC improvements\nThe new \u0026lsquo;rndc signing\u0026rsquo; command provides greater visibility and control of the automatic DNSSEC signing process. When a zone is being signed, records are inserted into the zone indicating which keys are currently in the process of signing and which have finished (this enables named to resume the process correctly if there is a crash before the zone is fully signed). That state information is now visible:\n'rndc signing -list \u0026lt;zone\u0026gt;' shows the current state of signing operations. 
'rndc signing -clear \u0026lt;key\u0026gt; \u0026lt;zone\u0026gt;' or 'rndc signing -clear all \u0026lt;zone\u0026gt;' can be used to remove the records that say a key has finished signing. (If a key is still in the process of signing, then its record cannot be removed.) 'rndc signing -nsec3param \u0026lt;parameters\u0026gt; \u0026lt;zone\u0026gt;' or 'rndc signing -nsec3param none \u0026lt;zone\u0026gt;' can be used to set or remove the NSEC3 parameters for a zone. If this is used on a zone that has not yet been signed, then the specified parameters will be stored for use when the zone is signed. Also, the new 'dig +rrcomments' option now provides more information about DNSKEY records, including each key's ID, algorithm, and function within a zone (key-signing key or zone-signing key), in order to help with troubleshooting of DNSSEC problems. Performance improvements\nLocking performance has been improved, particularly with regard to recursive clients; this allows better scaling with large numbers of threads. Slave zones are now saved in raw format by default. This can significantly reduce restart time on servers with large numbers of slave zones. Security fix As 9.9.0b1 is vulnerable to a recently discovered security flaw, anyone beta-testing 9.9.0 should switch to 9.9.0b2. ","permalink":"/blogs/bind-9-9-0b2-a-feature-update-2/","tags":["DNS","BIND","BIND 9.9"],"title":"BIND 9.9.0b2 — A Feature Update"},{"categories":["ISC DHCP","DHCP"],"contents":"Currently the DHCPv6 protocol does not allow the provisioning of any routing-related information to hosts. A new proposal is addressing this shortcoming. The draft draft-ietf-mif-dhcpv6-route-option defines a mechanism for delivering routing information over the DHCPv6 protocol. 
This article explains the core concepts and shows how to use the ISC DHCP software to deliver such information.\nA bit of history When the IPv6 protocol family was defined, it was decided that a new mechanism for discovering routers was necessary. Routers send Router Advertisement (RA) messages that contain their addresses and a list of routes that are reachable through them. These may contain additional information such as routes available directly on-link. This mechanism is an essential part of the Neighbor Discovery protocol. Currently, that is the only reasonable way of provisioning routing information to hosts. The other alternatives are usually avoided. Manually configuring each host in a network is time-consuming and error-prone, and must be re-done whenever routing information changes. The last possible way of delivering routing information to hosts would be to participate in dynamic routing (like RIP, OSPF or BGP), but that is not feasible for hosts and problematic on several levels. Therefore most hosts nowadays use RAs to configure their routing.\nProblem definition The RA-based approach works, but has several significant flaws:\nRAs are multicast to all hosts. There is no way to differentiate this information on a per host basis. For example, assume that one of the departments in a corporate network requires access to a specific network (e.g. software engineering need to connect to a lab). Another example would be an access network where regular users are using one router, while premium users are allowed to also use a second router. DHCPv6 requires RAs for proper operation. The DHCPv6 protocol assigns addresses but does not provide any information about routes. Imagine a separated network that does not have any routers. A DHCPv6 server assigns addresses to node A and node B. According to RFC3315, A and B cannot ping each other, even though they have been assigned addresses from the same prefix. 
They do not have routing information, so they don't know how to reach each other. Users who are experienced with IPv4 and DHCPv4 operation, when they begin migration to IPv6, have one very common question: \"How do I specify routing information in DHCPv6?\" In mobile networks, a mobile node is expected to connect to a new network then wait for periodically announced RAs, then begin DHCPv6 if that is required. It may also request an RA immediately by sending a Router Solicitation (RS) message, but that may not always work, due to restrictions placed on the routers regarding the maximum number of RAs sent per specific period of time. Solution Several engineers involved in the Internet Engineering Task Force (IETF) decided to address these problems. Draft-mif-dhcpv6-route-option defines a flexible way to provision routing information over DHCPv6. This draft has been adopted by the Multiple Interfaces (MIF) Working Group and is currently going through Workgroup Last Call (WGLC) so it is expected to be approved and published soon. The proposed approach solves all of the aforementioned limitations. In particular, it allows configuration on a per host basis. It also allows configuration of the default route and availability of local on-link route(s). In a sense, it makes DHCPv6 complete, as it no longer requires RAs to operate.\nThere are several ways of delivering routing information to hosts.\nThe DHCPv6 server may inform hosts that a route X/Y is available via router Z. The lifetime of such a route and its metric can be specified. The DHCPv6 server may inform hosts that a route X/Y is available locally, directly on-link. The lifetime of such a route and its metric can be specified. In bandwidth-constrained networks, a DHCPv6 server may inform clients that a default route is available via router X. This shorter option is discouraged unless there are significant bandwidth limitations. 
Implementation ISC provides several commercial quality open source software solutions, with ISC DHCP being one notable example. ISC DHCP is an implementation that supports both DHCPv4 and DHCPv6. It also offers a way to define custom options that are not supported directly. The following sections explain how to configure the existing ISC DHCP software to provision routing information to clients. The instructions were tested using the 4.2.3 release, but this approach should work on all currently supported versions from the 4.* family. Note that versions from the 3.x family do not support DHCPv6 and cannot be used for this purpose.\nServer configuration The following configuration file can be used to instruct a server to provision routing information to clients.\n# Example server configuration for routing parameters # as defined in draft-ietf-mif-dhcpv6-route-option-03 # This statement defines an option layout, not the values. # NEXT_HOP option with RTPREFIX option included option dhcp6.next-hop-rt-prefix code 242 = { ip6-address, unsigned integer 16, unsigned integer 16, unsigned integer 32, unsigned integer 8, unsigned integer 8, ip6-address }; # This statement configures actual values to be sent # RTPREFIX option code = 243, RTPREFIX length = 22 # lifetime = 9000 seconds # route 2001:db8:2::/64 # metric 1 option dhcp6.next-hop-rt-prefix 2001:db8:1::1 243 22 9000 64 1 2001:db8:2::; # Simplified mode (NEXT_HOP only, without RTPREFIX) for # bandwidth constrained networks. # Make sure that only simplified or full mode are uncommented, not both. # Uncomment this if you want to send simplified default-route information #option dhcp6.next-hop code 242 = ip6-address; #option dhcp6.next-hop 2001:db8:1::1; # Uncomment this if you want to provision information about routes # available directly on-link. 
# RTPREFIX option layout definition option dhcp6.rtprefix code 243 = {unsigned integer 32, unsigned integer 8, unsigned integer 8, ip6-address }; # RTPREFIX option values: 2001:db8:1:1234::/64 prefix # with lifetime 3600 available on-link, metric 1 option dhcp6.rtprefix 3600 64 1 2001:db8:1:1234::; # This is the usual definition of subnet with range allocated # for dynamic assignment. subnet6 2001:db8:1:15c::/64 { # Range for clients range6 2001:db8:1:15c::1:0 2001:db8:1:15c::1:ffff; # standard options option dhcp6.name-servers 2001:db8:1::1; option dhcp6.domain-search \"example.com\"; } Note that you must not mix full (NEXT_HOP with RTPREFIX option inside) and simplified (NEXT_HOP only) notation. Make sure that you have uncommented only one of them. Make sure that the client\u0026rsquo;s definition matches the server\u0026rsquo;s.\nTip: You may want to replace 2001:db8:2::/64 with ::/0 if you want to configure a default route using full notation.\nTip: When running this configuration, make sure that you use the -6 command-line flag (i.e., run the server in DHCPv6 mode, not in the default DHCPv4 mode) and that the machine running the server has an IPv6 address assigned that belongs to subnet6.\nClient configuration The following configuration can be used by clients to retrieve routing information.\n# Conf file for dhclient6 script \"/path/to/your/client-script.sh\"; # Uncomment this if you want to request only the default route #option dhcp6.option-next-hop code 242 = { ip6-address, encapsulate routing6 }; # Use this definition to provision the default or any other specific route # with additional information (metric and lifetime). 
# NEXT_HOP option with RTPREFIX option included option dhcp6.next-hop-rtprefix code 242 = { ip6-address, unsigned integer 16, unsigned integer 16, unsigned integer 32, unsigned integer 8, unsigned integer 8, ip6-address }; # Simplified mode (NEXT_HOP only, without RTPREFIX) # Uncomment this if you want to send simplified default-route information # Make sure that only simplified or full mode are uncommented, not both. #option dhcp6.next-hop code 242 = ip6-address ; # RTPREFIX option option dhcp6.rtprefix code 243 = {unsigned integer 32, unsigned integer 8, unsigned integer 8, ip6-address }; # This specifies the list of options that the client will request. request dhcp6.name-servers, dhcp6.domain-search, dhcp6.rtprefix, dhcp6.next-hop-rtprefix; The client will execute the specified script upon reception of this information. Several environment variables will be passed to that script. You should use their values to configure routing on your host. That varies between OSes, so please check the documentation for your system. For example, a client that receives routing information may pass the following variables:\nnew_dhcp6_next_hop_rtprefix='2001:db8:1::1 243 22 9000 64 1 2001:db8:2::' new_dhcp6_rtprefix='3600 64 0 2001:db8:1:1234::' Tip: If you are unsure about the names of the variables, please consult the appropriate manual page. As a debugging aid, you may also call the set command without arguments at the beginning of your script; it will print out all variables that are set.\nInteroperability Working drafts that are not yet published as RFCs do not typically have option values assigned. The presented examples use values 242 for NEXT_HOP and 243 for RTPREFIX option codes. Once the IETF approves this draft, IANA will assign actual option codes that are almost guaranteed to be different from those used in this example.\nISC is planning to provide direct support for NEXT_HOP and RTPREFIX options, without requiring users to resort to defining custom options. 
An exact schedule is not decided yet, as it depends on the standardization process of the draft in IETF.\nThe presented approach is non-proprietary and is interoperable with DHCP implementations from other vendors. It was tested to work with the development version of Dibbler, an open source DHCPv6 implementation.\n","permalink":"/blogs/routing-configuration-over-dhcpv6-2/","tags":["ISC DHCP","DHCP","DHCPv6"],"title":"Routing Configuration Over DHCPv6"},{"categories":["DNS","BIND"],"contents":"Yesterday afternoon, ISC published the first alpha release of BIND 9.9.0. This is an early technology preview, showing off some of the work we\u0026rsquo;ve been doing in BIND 9.\nThere will be more new features added in later alpha releases, but here\u0026rsquo;s what\u0026rsquo;s ready to debut now\u0026hellip;\nNew Features The big new item in 9.9.0a1 is NXDOMAIN redirection. This enables a resolver to respond to a client with locally configured information when a query would otherwise have gotten an answer of \u0026ldquo;no such domain\u0026rdquo;. This enables a recursive operator, for example, to provide alternate suggestions for misspelled domain names. (Names that are in DNSSEC-signed domains are exempted from this when validation is in use.)\nPerformance Improvements In addition to the start-up performance improvements that have already been released in BIND 9.8.1, BIND 9.9.0 improves query throughput on multi-processor systems by allowing multiple threads to listen for and process incoming queries.\nDNSSEC Improvements There have been several changes to improve DNSSEC usability:\nZones that were signed automatically via the 'auto-dnssec' option used to use NSEC by default, but could be switched to NSEC3 afterward. It is now possible to set NSEC3 parameters in these zones before they sign, skipping the NSEC3 step entirely. The 'dnssec-signzone -D' option causes dnssec-signzone to write DNSSEC data to a separate output file. 
This allows you to put \"$INCLUDE example.com.signed\" into the zonefile for example.com, run 'dnssec-signzone -SD example.com', and the result is a fully signed zone which did *not* overwrite your original zone file. Running the same command again will incrementally re-sign the zone, replacing only those signatures that need updating, rather than signing the entire zone from scratch. The 'dnssec-signzone -X' option allows signatures on DNSKEY records to have a different expiration date from other signatures. This makes it more convenient to keep your KSK on a separate system, and resign the zone with it less frequently. A new '-L' option to dnssec-keygen, dnssec-settime, and dnssec-keyfromlabel sets the default TTL for the DNSKEY record. The 'dnssec-signzone -R' option forces removal of signatures that are not expired but were created by a key which no longer exists. dnssec-dsfromkey can now read from standard input, making it easier to convert DNSKEY records to DS. For example, to get the DS records for isc.org, all that is necessary is: dig dnskey isc.org | dnssec-dsfromkey -f - isc.org (And stay tuned for BIND 9.9.0a2, which will include a new \u0026lsquo;inline-signing\u0026rsquo; option, allowing \u0026ldquo;bump in the wire\u0026rdquo; signing.)\nConfiguration Improvements The 'also-notify' option now takes the same syntax as 'masters', so you can use named lists of servers, and specify TSIG keys. The 'serial-update-method' option allows dynamic zones to have their SOA serial number set to the current UNIX time if desired, rather than simply incrementing the serial number with each change to the zone. Other Improvements The 'rndc flushtree' command clears all data under a given name from the DNS cache. So, for example, \"rndc flushtree example.com\" will not only remove example.com from the cache, but also www.example.com, mail.example.com, and so on. 
The 'rndc sync' command dumps pending changes in a dynamic zone to disk without having to go through a freeze/thaw cycle. 'rndc freeze' and 'rndc thaw' no longer remove the zone's journal file; this makes it possible to use ixfr-from-differences with a dynamic zone. ","permalink":"/blogs/isc-bind-9-9-0a1-feature-preview/","tags":["DNS","BIND","BIND 9.9"],"title":"ISC BIND 9.9.0a1 — Feature Preview"},{"categories":["DHCP","ISC DHCP"],"contents":"Dual Stack Lite is an architecture that allows IPv4 services to be provided in an IPv6 network, despite a limited amount of available IPv4 addresses. Work on DS-Lite was conducted within the Softwires working group in the IETF, and began in late 2008. After many revisions it was recently published as RFC6333, with its companion RFC6334 dedicated to automated configuration. Both authors of RFC6334 - David Hankins and Tomasz Mrugalski - worked or are currently working for ISC.\nIn a typical environment, the Internet Service Provider (ISP) usually deploys Customer Premises Equipment (CPE), a small home gateway that performs Network Address Translation (NAT), so the customer can connect several devices, e.g. a desktop computer, a laptop, and WiFi access point. This approach is very convenient, but has a significant drawback of requiring one IPv4 address for each customer. Due to a shortage of IPv4 addresses that approach is very problematic for many operators, especially the bigger ones.\nThe DS-Lite architecture however, differs from the classical IPv4 deployment model. Due to exhaustion of IPv4 address space, nowadays it is impossible to obtain new IPv4 addresses. To share one address between several customers, NAT had to be moved to a different location. Instead of translating packets on the ISP network border, NAT was moved deeper into the ISP network. IPv6 is used as a transport layer between the CPE and NAT. 
In DS-Lite nomenclature, a CPE performing IPv4 to IPv4-over-IPv6 encapsulation is called the Basic Bridging BroadBand (B4) element. The carrier-grade NAT element located deep within the ISP network is called the Address Family Transition Router (AFTR).\nTo leverage such an architecture, the B4 element has to learn the address of an AFTR that will serve as its tunnel termination point. Manual configuration is not feasible in most cases, so an automated method was defined. The best way to deliver the necessary information to a B4 is DHCPv6. RFC6334 defines a new option called AFTR_NAME that conveys the fully qualified domain name (FQDN) of an AFTR. The ability to convey a name rather than simply an address offers several benefits. The most desirable is that it allows network operators to use a name that resolves to a different address for different clients, thus providing load balancing.\nISC is actively supporting deployment of IPv6 in general. In particular, it is involved in many transition technologies; Dual Stack Lite is one such technology. ISC DHCP already allows this option to be configured as a custom option. Dedicated support is planned for ISC DHCP 4.3.\nISC provides an open source, reference implementation of AFTR, as well as instructions for configuring a home gateway as a B4. Our engineers are also involved in protocol and implementation development of an associated technology called Port Control Protocol (PCP).\n","permalink":"/blogs/ds-lite-architecture-overview-and-automatic-configuration/","tags":["DHCP","IPv4","IPv6","Dual Stack Lite"],"title":"DS-Lite Architecture: Overview and Automatic Configuration"},{"categories":["BIND","DNS"],"contents":"One of the common complaints we\u0026rsquo;ve received over the years about BIND 9 is that large authoritative servers, particularly those with a very large number of small zones, are slow to launch. 
I\u0026rsquo;ve met some DNS operators who expressed a powerful aversion to upgrading their systems, because a shutdown and restart can literally take all day.\nIf that describes you, I have some good news. There is a simple optimization for BIND 9 that can dramatically improve your startup performance. New versions of BIND are being released soon to take advantage of it.\nI recently did some profiling experiments on a server with tens of thousands of small zones, and discovered that the delay was not, as I had expected, primarily caused by loading the server configuration and the zone database. In fact, named was spending the vast majority of its time repeatedly walking very long linked-lists. Further examination to find the reason for this revealed a simple but significant tuning bug that\u0026rsquo;s been overlooked for years: The zone tasks were massively overburdened.\nIn some ways, BIND 9 is almost like a miniature operating system. From the perspective of your real OS, named is just a single process\u0026hellip; but within named, there are more processes, all taking turns doing their jobs, then yielding control to the next miniature process. These internal mini-processes are called \u0026ldquo;tasks,\u0026rdquo; and they handle all the functions of the name server - sending queries, answering queries, cleaning the cache, and so on.\nEach zone served by a BIND 9 server has a task associated with it, whose job is to do all the routine maintenance for an authoritative zone: sending SOA requests to masters, sending NOTIFY messages to slaves, dumping dynamic zone data to disk, regenerating expiring DNSSEC signatures, and so forth. Since these functions don\u0026rsquo;t usually all happen at once, a single task can support many zones; too many zones and the task can be overwhelmed.\nIt turned out that the pool from which the zone tasks were assigned was fixed in size, and much too small. 
And the damage this did to startup performance was immense: On a test server with 8 processors and 12G of memory, a server with a million zones took well over ten hours to begin serving queries. And no wonder, because those million zones were sharing the resources of only eight zone tasks.\nWhen I tried increasing the size of the task pool, I expected to see a reduction in startup time. What I didn\u0026rsquo;t expect was a near elimination of startup time:\nnamed started up and began serving queries in a little over fifteen minutes, most of which was spent parsing the very large named.conf file. Loading the zones, a process that had taken over ten hours in the previous run, now took 2-3 minutes.\nA larger task pool does take more memory, but it\u0026rsquo;s negligible compared to the size of the zone data. If you serve hundreds of thousands or millions of zones, you can expect to see a factor-twenty improvement in startup time at the cost of about 2% more memory.\nThe single change to be made is in the file lib/dns/zone.c. When the function isc_taskpool_create() is called, the third argument — set to 8 in most versions of BIND — should be set to a number that\u0026rsquo;s roughly one one-hundredth of the number of zones you expect to be serving. (There is also a slight theoretical benefit if the number happens to be prime, though in practice the difference is quite small.)\nIf you\u0026rsquo;re running a million zones, you want about ten thousand zone tasks. 10007 happens to be prime. Changing the 8 to 10007 should dramatically improve your startup performance:\n--- zone.c.00 2011-07-12 08:56:34.000000000 -0700 +++ zone.c 2011-07-12 14:46:44.000000000 -0700 @@ -12455,8 +12455,7 @@ zmgr-\u0026gt;transfersperns = 2; /* Create the zone task pool. 
*/ - result = isc_taskpool_create(taskmgr, mctx, - 8 /* XXX */, 2, \u0026amp;zmgr-\u0026gt;zonetasks); + result = isc_taskpool_create(taskmgr, mctx, 10007, 2, \u0026amp;zmgr-\u0026gt;zonetasks); if (result != ISC_R_SUCCESS) goto free_rwlock; Better yet, though, don\u0026rsquo;t bother editing C files, and just install the newest releases of BIND 9. In the upcoming 9.8.1, which will have its third beta release this week, named counts the zones at startup time and automatically scales its zone task table accordingly.\nThe upcoming 9.6-ESV-R5 and 9.7.4 releases were already very close to final release when this trick was discovered. Since they\u0026rsquo;ve already been through beta, we decided we\u0026rsquo;d make a smaller, less-invasive change in those. When the final releases come out in the next week or so, you\u0026rsquo;ll be able to set an environment variable — BIND9_ZONE_TASKS_HINT — with your desired number of zone tasks.\nIn later releases of 9.6 and 9.7, we will backport the automatic scaling code, and the environment variable will no longer be necessary.\nHappy scaling!\n","permalink":"/blogs/a-major-improvement-in-bind-9-startup-performance/","tags":["BIND","DNS","performance"],"title":"A Major Improvement in BIND 9 Startup Performance"},{"categories":["BIND","DNS"],"contents":"ISC BIND 9.8.1b3 is now available. BIND 9.8.1b3 is the third beta release of BIND 9.8.\nThis document summarizes changes from BIND 9.8.0 to BIND 9.8.1b3. Please see the CHANGES file in the source code release for a complete list of all changes.\nDownload\nThe latest versions of BIND 9 software can always be found on our website at https://www.isc.org/download. There you will find additional information about each release, source code, and some pre-compiled versions for certain operating systems.\nSupport Product support information is available on https://www.isc.org/support.\nFree support is provided by our user community via a mailing list. 
Information on all public email lists is available at https://lists.isc.org/mailman/listinfo.\nNew Features\nAdded a new include file with function typedefs for the DLZ \u0026ldquo;dlopen\u0026rdquo; driver. [RT #23629]\nAdded a tool that can generate malformed packets, to allow testing of how named handles them. [RT #24096]\nSecurity Fixes\nIf `named` is configured with a response policy zone (RPZ) and a query of type RRSIG is received for a name configured for RRset replacement in that RPZ, it will trigger an INSIST and crash the server. [RT #24280] `named`, set up as a caching resolver, is vulnerable when a user queries a domain with very large resource record sets (RRSets) and the response is negatively cached. Due to an off-by-one error, caching the response could cause `named` to crash. [RT #24650] [CVE-2011-1910] Using a Response Policy Zone (RPZ) to query a wildcard CNAME label with query type SIG/RRSIG can cause `named` to crash. The fix is query-type independent. [RT #24715] Using a Response Policy Zone (RPZ) with DNAME records and querying a subdomain of that label can cause `named` to crash. `named` now logs that DNAME is not supported. [RT #24766] Change #2912 populated the message section in replies to UPDATE requests, which some Windows clients wanted. This exposed a latent bug that allowed the response message to crash `named`. With this fix, change 2912 has been reduced to copy only the zone section to the reply. A more complete fix for the latent bug will be released later. [RT #24777] Feature Changes\nImproved the startup time for an authoritative server with a large number of zones by making the zone task table variable in size rather than fixed size. This means that authoritative servers with lots of zones will be serving that zone data much sooner. [RT #24406]\nMerged in the NetBSD ATF test framework (currently version 0.12) for development of future unit tests. 
Use configure --with-atf to build ATF internally or configure --with-atf=prefix to use an external copy. [RT #23209]\nAdded more verbose error reporting from DLZ LDAP. [RT #23402]\nThe DLZ \u0026ldquo;dlopen\u0026rdquo; driver is now built by default, no longer requiring a configure option. To disable it, use \u0026ldquo;configure --without-dlopen\u0026rdquo;. (Note: driver not supported on win32.) [RT#23467]\nReplaced a compile-time constant with STDTIME_ON_32BITS. [RT #23587]\nMake --with-gssapi the default for ./configure. [RT #23738]\nBug Fixes\nThank you to everyone who assisted us in making this release possible. If you would like to contribute to ISC to assist us in continuing to make quality open source software, please visit our donations page at https://www.isc.org/donate.\n","permalink":"/blogs/isc-bind-9-8-1b3-provides-startup-performance-improvements/","tags":["BIND","DNS"],"title":"ISC BIND 9.8.1b3 Provides Startup-Performance Improvements"},{"categories":["DHCP","ISC DHCP"],"contents":"Numbering Computers in IPv6 Computers and other IPv6-enabled devices need a way to select which IP address they are using, just like in IPv4. IPv6 provides several ways to do this:\nManual configuration Autoconfiguration (Stateless Address Autoconfiguration, or SLAAC) Cryptographically Generated Addresses (CGA) DHCPv6 (sometimes called stateful autoconfiguration) Manual configuration works the same in IPv6 as in IPv4 - the user or the user's administrator configures the address by hand. While practical for very small networks, this is almost never done today due to the ease of use of the other methods and the more difficult notation in IPv6. SLAAC allows a node (a computer or other device that is not a router) to automatically generate an address. 
This is done using information sent from nearby IPv6 routers, as well as some other information such as a unique hardware identifier or a random number.\nCGA is a method designed to make it difficult for someone to spoof the address of another computer on a network; it relies on the large IPv6 address space to encode a hash of a public key. This is an improvement over ARP, which is used for neighbor discovery in IPv4, and over unsecured ND, which does the same in IPv6. It is optional and not yet widely deployed.\nFinally, there is DHCPv6. DHCPv6 performs the same role in IPv6 as DHCP does in IPv4. It allows a central server to distribute addresses and other configuration information to a number of hosts.\nWhy DHCPv6? The most common way to number in IPv6 today is using SLAAC. DHCPv6 offers a number of advantages over SLAAC:\nIt allows central management, so administrators can know which addresses were in use at which time. This may be important for auditing, billing, and other purposes. Administrators can use similar tools for IPv4 and IPv6 network management, and they are comfortable with DHCP. It is easy for the DHCPv6 server to set up DNS and many other services, like the SIP parameters required for VoIP, on behalf of hosts. With DNS updates in DHCPv6, clients can enjoy having human-readable names, like desktop-joe.example.org rather than just an address like 2001:db8:1:21e:8cff:fe9b:7349. Entire prefixes, for example a /48 or a /64, can be delegated, instead of single addresses. Configuration other than addresses is easy to transmit. (For example, until recently there was no way to transfer information about local DNS recursive resolvers except through DHCPv6.) Granting different parameters. Administrators may configure DHCPv6 to grant specific parameters to different groups or even single users. Access control. Administrators can configure the DHCPv6 server to refuse to assign parameters to untrusted or unwanted users. How is DHCPv6 Different from DHCP in IPv4? 
The basic protocol flow is essentially the same for DHCP in IPv4 and for DHCPv6. However, the details are very different.\nNo baggage DHCP is based on an earlier protocol called BOOTP. The inherited packet layout is wasteful in many cases. It is also cumbersome to parse and examine with network tools. Due to backward compatibility, there are a number of restrictions that further complicate DHCP for IPv4. A lot of the options turn out not to be useful, or not as useful as they could be, but it is hard to change a protocol with such a large installed base. There are a lot of \"tweaks\" that implementations need in order to be compatible with buggy clients. IPv6 is better. Four features of IPv6 greatly improve DHCPv6: IPv6 hosts have \"link-local addresses\". Every network interface has a unique address that can be used to send and receive on the local link only. IPv6 hosts can use this to send requests for \"real\" addresses. IPv4 hosts have to use system-specific hacks to work before they have an address. All IPv6 systems support multicasting. All DHCPv6 servers register that they want to receive DHCPv6 multicast packets. This means the network knows where to send them. In IPv4, clients broadcast their requests, and networks do not know how far to send them. DHCPv6 supports robust relaying, with up to 32 relays in each network. This greatly improves scalability and network manageability. Thanks to a number of features, like deprecated addresses and the Information Refresh Time option, network administrators can easily perform configuration changes and even network renumbering with users barely noticing. ISC DHCPv6 ISC was approached by Comcast in 2006 to implement DHCPv6, as they were looking for an open source DHCPv6 server to test IPv6 functionality on DOCSIS 3.0 modems. ISC released ISC DHCP 4.0 in December 2007, which included a DHCPv6 server, client, and relay. 
ISC continues to release new versions of ISC DHCP, which include additional functionality and other improvements to DHCPv6.\nISC is actively participating in DHCP and DHCPv6 standardization efforts within the IETF. ISC DHCP is often the first protocol implementation that offers new capabilities and is often used as a reference implementation by other vendors.\n","permalink":"/blogs/isc-dhcp-and-ipv6-the-dhcpv6-story/","tags":["DHCP","ISC DHCP","IPv4","IPv6"],"title":"ISC DHCP and IPv6 – the DHCPv6 Story"},{"categories":["BIND","DNS","DHCP","ISC DHCP"],"contents":"A new milestone in the history and evolution of the Internet has passed: On Thursday, February 3, 2011, it was announced that the Internet Assigned Numbers Authority (IANA), steward of the Internet\u0026rsquo;s reserves of unassigned IP addresses, has distributed the final blocks of IPv4 addresses to the Regional Internet Registries (RIRs). The RIRs, based in North America, Europe, Asia, South America, and Africa, will now allocate them, according to rules developed in each region, to service providers and enterprises worldwide. And then all of the IPv4 addresses will be in use.\nWhat Happened? In many ways this is a non-event. No network, application, or activity that depends on IPv4 today will stop working due to the events this week. If you aren\u0026rsquo;t trying to grow your network, directly connected to the global infrastructure, your primary interest in IPv6 is in using it for the simplest, most straightforward interconnection with carriers and content providers in the future. If you do need IPv6 to grow your network, a number of upgrade strategies are available to you. Nothing changes instantly because of this long-predicted turn in the path.\nBut it is the next step in a vast and inevitable change. Sometime in the not-so-distant future\u0026ndash; a few months\u0026ndash; the RIRs will no longer have unassigned IPv4 to distribute either. 
The ability to attach new devices and networks to the Internet will depend on transferring IPv4 addresses from someone else who doesn\u0026rsquo;t need them, connecting with IPv6, or using some set of \u0026ldquo;transition\u0026rdquo; or \u0026ldquo;co-existence\u0026rdquo; technologies allowing IPv4 and IPv6 connected devices to talk to each other. All of these approaches can be expected to persist, in all possible combinations, for some time to come; but ultimately, today\u0026rsquo;s Internet, in which the default underlying protocol is IPv4, will become a network built on IPv6.\nWhat does this mean for the Internet? In the long term, the deployment of IPv6 promises a vastly larger, richer Internet of more devices and more ways for them to interact. But in the short term, adaptation of the Internet\u0026ndash; operating systems, networks, applications\u0026ndash; to IPv6 has lagged the depletion of the IPv4 address pool.\nSo today, there\u0026rsquo;s a gap to be filled: we\u0026rsquo;re running out of available IPv4 addresses faster than we\u0026rsquo;re migrating away from the need for them. Even with the wide deployment of NAT, invented years ago to extend the useful address space beyond what the original specification allowed, there\u0026rsquo;s only so far we can go in aggregating vast numbers of separate devices behind small numbers of unique IP addresses, and we\u0026rsquo;re at that limit. Address policy at the RIRs has been getting ever more conservative for a decade, but we\u0026rsquo;ve gone as far as we can. Even the policy evolution that has allowed the RIRs to permit direct address transfers between members once the unallocated pool has been exhausted\u0026ndash; allowing the development of a market in address space\u0026ndash; can\u0026rsquo;t put off the need forever for a larger pool of numbers than IPv4 can provide. 
There\u0026rsquo;s simply no other way to connect the users, devices, and services of the future.\nThe significance of the end of the IPv4 unallocated address pool is this: if you haven\u0026rsquo;t thought about how you\u0026rsquo;re going to live, and operate any network you are responsible for, in an IPv6 future, you need to start.\nThe good news is that the tools are in hand to bridge the gap. Significant work has gone into the invention of \u0026ldquo;co-existence\u0026rdquo; technologies to allow systems, networks, and applications not yet ready to use IPv6 to interoperate with those that are. The simplest of these is usually called \u0026ldquo;dual stack,\u0026rdquo; in which the same host has both IPv4 and IPv6 addresses; but others have been developed as well, to accommodate both IPv4-only and IPv6-only hosts and networks. These are particularly important now that devices without IPv4 addresses soon may not be able to get them.\nLarge carriers, content providers, hosting providers, and others for whom access to IP addresses for new devices is mission-critical, have been working towards wide deployment of IPv6 for some time now, and their vendors\u0026ndash; including ISC\u0026ndash; have been working towards accommodating them. So there is also beginning to be a base of operational experience to draw on, for both the ways in which IPv6 works \u0026ldquo;just like IPv4\u0026rdquo; and the ways in which it is different.\nWhat does this mean for ISC and our customers? First, ISC has been looking towards the migration of the Internet infrastructure to IPv6 for a very long time now. We have contributed significant effort to the standards for accommodating IPv6, and to the policy evolution of the RIRs to promote its use. Our infrastructure software products\u0026ndash; principally BIND 9 and DHCP\u0026ndash; have been fully compatible with existing IPv6 standards for years. 
For example, our DHCP package implements the DHCPv6 standard, in both client and server, including IPv6 prefix delegation, which helps support far larger numbers of devices in an IPv6 Internet home or small office than has been the case for IPv4 sites. And our services, including SNS, F-Root, and Hosted@ have been configured for dual-stack operation for years as well. Our DNS root name server was one of the first to have an IPv6 address publicly available and published in the root zone for serving DNS queries over IPv6 transport, and has helped make the business case for IPv6 availability at many of the locations where we operate. And our expanding training offerings now include not only coverage of IPv6 features in our courses on BIND and DHCP, but a new dedicated IPv6 course.\nIn the last couple of years, we have also worked on standards and implementation for some of the new technologies to support IPv4 and IPv6 interoperation. We have added advanced features for IPv6 and mixed networks to BIND and DHCPv6. We have released AFTR, our package implementing the dual-stack lite protocol to allow IPv4 devices to communicate across an IPv6 carrier network, and we continue to add features as the standard and our customers\u0026rsquo; needs evolve; we are currently implementing PCP, or Port Control Protocol, as a further evolution in dual-stack lite. BIND 9.8 will support DNS64, required to enable in turn the NAT64 standard which allows IPv6 devices to communicate with IPv4 servers. ISC DHCP 4.2.1 will be compatible with 6rd, another recently-specified protocol for deploying native IPv6 connectivity to end-users, and there will be more enhancements to IPv6 capabilities in DHCP 4.3, later this year.\nOur software, support and consulting engineers are ready to draw upon our experience, and that of the wider community, to help make the future IPv6 Internet a reality for our customers and partners everywhere. 
We\u0026rsquo;ll help you figure out what IPv6 means to you, and how to get there with minimal cost and maximum benefit.\nToday, though, we stop in amazement to look at what the world has done with the IPv4 protocol that has been the foundation for the Internet. We\u0026rsquo;re looking forward to seeing what we can make of the IPv6 Internet of the future.\n","permalink":"/blogs/an-ending-and-an-opportunity/","tags":["BIND","DNS","DHCP","ISC DHCP","IPv4","IPv6"],"title":"An Ending and An Opportunity"},{"categories":["BIND","DNS"],"contents":"With the worldwide deployment of IPv6 in parallel with IPv4, it has become apparent that the traditional connection loop is no longer good enough.\nIn fact, this is a large part of the reason why Google is whitelisting resolvers and Yahoo only wants to return AAAA records for DNS queries made over IPv6. The traditional connection loop does not behave well in the presence of some network errors, and it introduces excessive delays even when there are good alternate addresses to use.\nThis has not been a big problem in the past, as most sites have been single-homed, so there were no alternate addresses to try. But with the deployment of IPv6 alongside IPv4, almost all sites will become multi-homed, with a minimum of two addresses, so now is the time to fix this problem.\nWith a traditional connection loop, each address returned from gethostbyname() or getaddrinfo() is tried in turn, and the application stalls until the connection attempt succeeds or fails. Then the next address is tried, etc. 
While most successful connections complete in less than 500 milliseconds, a failed connection attempt can take up to half a minute before we move on to the next address, adding a lot of unnecessary latency: a single connect() call can take 30 seconds to fail, so if the first address you try is broken, you can end up waiting a long time before even trying the next one.\nfd = -1; for (ai = ai0; ai; ai = ai-\u0026gt;ai_next) { fd = socket(ai-\u0026gt;ai_family, ai-\u0026gt;ai_socktype, ai-\u0026gt;ai_protocol); if (fd \u0026lt; 0) continue; if (connect(fd, ai-\u0026gt;ai_addr, ai-\u0026gt;ai_addrlen) \u0026lt; 0) { close(fd); fd = -1; continue; } /* success */ break; } You see this sort of connection loop in most textbooks on socket programming and in the man page for getaddrinfo().\nThe first observation to be made is that we can make these connection attempts in parallel; this works, but it leads to lots of unnecessary connections being made if we start them all at once. Most of the time, the first connection attempt will succeed, so we should give it an opportunity to do so before making a second attempt.\nThe sample code attached takes the output of getaddrinfo() and tries each address in turn, waiting a decreasing amount of time between subsequent connection attempts. When one of the connection attempts completes, it aborts the others. The initial timeout is 500 milliseconds, which is enough time to connect to a European server from Sydney, Australia using terrestrial paths.\nCode samples for poll()-, select()-, and pthread-based C are included below.\nThread based sample code /* * Copyright (C) 2011 Internet Systems Consortium, Inc. (\u0026#34;ISC\u0026#34;) * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. 
* * THE SOFTWARE IS PROVIDED \u0026#34;AS IS\u0026#34; AND ISC DISCLAIMS ALL WARRANTIES WITH * REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY * AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, * INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE * OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR * PERFORMANCE OF THIS SOFTWARE. */ /* * Initial timeout between connection attempts. The smaller this is the * more embryonic connection attempts that will be made. On each subsequent * connection attempt the timeout will be halved leading to all connection * attempts being initiated within 2 * TIMEOUT ms. * * 100 ms will let most intra continent connections succeed without a *\tembryonic connection. * 500 ms well let most intercontinental connections succeed without a * embryonic connection. */ #define TIMEOUT 500\t/* 500 ms */ #define TESTING 1 #include \u0026lt;sys/types.h\u0026gt; #include \u0026lt;sys/socket.h\u0026gt; #include \u0026lt;sys/time.h\u0026gt; #include \u0026lt;netinet/in.h\u0026gt; #include \u0026lt;assert.h\u0026gt; #include \u0026lt;errno.h\u0026gt; #include \u0026lt;netdb.h\u0026gt; #include \u0026lt;pthread.h\u0026gt; #include \u0026lt;stdarg.h\u0026gt; #include \u0026lt;stdbool.h\u0026gt; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026lt;string.h\u0026gt; #include \u0026lt;unistd.h\u0026gt; struct common { pthread_mutex_t *mutex; pthread_cond_t *cond; int *count; int *fd; }; struct state { struct addrinfo *addrinfo; struct common *common; }; static void fatal(char *format, ...) 
{ va_list ap; va_start(ap, format); vfprintf(stderr, format, ap); va_end(ap); abort(); } static void connect_to_address_cleanup(void *arg) { int *fd = arg; if (*fd != -1) close(*fd); } static void * connect_to_address(void *arg) { struct state *state = arg; struct addrinfo *addrinfo = state-\u0026gt;addrinfo; struct common *common = state-\u0026gt;common; int fd = -1, n; /* Ensure that fd is closed if we are canceled. */ pthread_cleanup_push(connect_to_address_cleanup, \u0026amp;fd); fd = socket(addrinfo-\u0026gt;ai_family, addrinfo-\u0026gt;ai_socktype, addrinfo-\u0026gt;ai_protocol); if (fd \u0026lt; 0) { /* * If AI_ADDRCONFIG is not supported we will get EAFNOSUPPORT * returned. Silently ignore it. */ if (errno != EAFNOSUPPORT) perror(\u0026#34;socket\u0026#34;); } else if (connect(fd, addrinfo-\u0026gt;ai_addr, addrinfo-\u0026gt;ai_addrlen) == -1) { perror(\u0026#34;connect\u0026#34;); close(fd); fd = -1; } /* If we get here we want the rest of the thread to complete. */ n = pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL); if (n != 0) fatal(\u0026#34;pthread_setcancelstate: %s\u0026#34;, strerror(n)); n = pthread_mutex_lock(common-\u0026gt;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_lock: %s\u0026#34;, strerror(n)); if (fd != -1 \u0026amp;\u0026amp; *common-\u0026gt;fd == -1) { /* Record success. */ *common-\u0026gt;fd = fd; fd = -1; } *common-\u0026gt;count -= 1; n = pthread_mutex_unlock(common-\u0026gt;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_unlock: %s\u0026#34;, strerror(n)); pthread_cleanup_pop(1); /* Signal that we are done. 
*/ n = pthread_cond_signal(common-\u0026gt;cond); if (n != 0) fatal(\u0026#34;pthread_cond_signal: %s\u0026#34;, strerror(n)); pthread_exit(NULL); } int connect_to_host(struct addrinfo *res0) { struct addrinfo *res; int fd = -1, n, i, j, count; pthread_t *threads; struct state *state; int timeout = TIMEOUT * 1000; struct timespec timespec; pthread_cond_t cond; pthread_mutex_t mutex; struct common common; /* * Work out how many possible descriptors we could use. */ for (res = res0, count = 0; res; res = res-\u0026gt;ai_next) count++; threads = calloc(count, sizeof(*threads)); state = calloc(count, sizeof(*state)); if (threads == NULL || state == NULL) { perror(\u0026#34;calloc\u0026#34;); goto cleanup; } n = pthread_mutex_init(\u0026amp;mutex, NULL); if (n != 0) { fprintf(stderr, \u0026#34;pthread_mutex_init: %s\u0026#34;, strerror(n)); goto cleanup; } n = pthread_cond_init(\u0026amp;cond, NULL); if (n != 0) { fprintf(stderr, \u0026#34;pthread_cond_init: %s\u0026#34;, strerror(n)); goto cleanup_mutex; } common.fd = \u0026amp;fd; common.cond = \u0026amp;cond; common.count = \u0026amp;count; common.mutex = \u0026amp;mutex; /* * fd and count are protected by mutex. */ for (res = res0, i = 0, count = 0; res; res = res-\u0026gt;ai_next) { bool done; state[i].common = \u0026amp;common; state[i].addrinfo = res; n = pthread_create(\u0026amp;threads[i], NULL, connect_to_address, \u0026amp;state[i]); if (n != 0) fprintf(stderr, \u0026#34;pthread_create: %s\u0026#34;, strerror(n)); else { i++; n = pthread_mutex_lock(\u0026amp;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_lock: %s\u0026#34;, strerror(n)); count++; n = pthread_mutex_unlock(\u0026amp;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_unlock: %s\u0026#34;, strerror(n)); } done = false; do { n = pthread_mutex_lock(\u0026amp;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_lock: %s\u0026#34;, strerror(n)); /* No outstanding threads? */ if (count == 0) { /* Are we done? 
*/ if (fd != -1 || res-\u0026gt;ai_next == NULL) done = true; n = pthread_mutex_unlock(\u0026amp;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_unlock: %s\u0026#34;, strerror(n)); break; } if (res-\u0026gt;ai_next != NULL) { struct timeval tv; n = gettimeofday(\u0026amp;tv, NULL); if (n != 0) fatal(\u0026#34;gettimeofday: %s\\n\u0026#34;, strerror(errno)); timespec.tv_sec = tv.tv_sec; timespec.tv_nsec = (tv.tv_usec + timeout) * 1000; while (timespec.tv_nsec \u0026gt;= 1000000000) { timespec.tv_nsec -= 1000000000; timespec.tv_sec += 1; } n = pthread_cond_timedwait(\u0026amp;cond, \u0026amp;mutex, \u0026amp;timespec); } else n = pthread_cond_wait(\u0026amp;cond, \u0026amp;mutex); if (n == ETIMEDOUT) timeout \u0026gt;\u0026gt;= 1; else if (n != 0) fatal(\u0026#34;pthread_cond_%swait: %s\\n\u0026#34;, res-\u0026gt;ai_next != NULL ? \u0026#34;timed\u0026#34; : \u0026#34;\u0026#34;, strerror(n)); if (fd != -1 || (count == 0 \u0026amp;\u0026amp; res-\u0026gt;ai_next == NULL)) done = true; n = pthread_mutex_unlock(\u0026amp;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_unlock: %s\u0026#34;, strerror(n)); } while (!done \u0026amp;\u0026amp; res-\u0026gt;ai_next == NULL); if (done) break; } /* Shutdown and tidy up all the threads we started. */ for (j = 0; j \u0026lt; i; j++) { n = pthread_cancel(threads[j]); if (n != 0 \u0026amp;\u0026amp; n != ESRCH) fatal(\u0026#34;pthread_cancel: %s\\n\u0026#34;, strerror(n)); n = pthread_join(threads[j], NULL); if (n != 0) fatal(\u0026#34;pthread_join: %s\\n\u0026#34;, strerror(n)); } /* Cleanup the resources we used. */ n = pthread_cond_destroy(\u0026amp;cond); if (n != 0) fatal(\u0026#34;pthread_cond_destroy: %s\u0026#34;, strerror(n)); cleanup_mutex: n = pthread_mutex_destroy(\u0026amp;mutex); if (n != 0) fatal(\u0026#34;pthread_mutex_destroy: %s\u0026#34;, strerror(n)); cleanup: /* Free everything. 
*/ if (threads) free(threads); if (state) free(state); return (fd); } #if TESTING int main(int argc, char **argv) { int fd, n; struct timeval then, now; struct addrinfo hints, *res0; const char *hostname, *servname; hostname = \u0026#34;localhost\u0026#34;; if (argv[1]) hostname = argv[1]; servname = \u0026#34;http\u0026#34;; /* * Not all getaddrinfo() implementations support AI_ADDRCONFIG * even if it is defined. Retry without it on EAI_BADFLAGS. */ memset(\u0026amp;hints, 0, sizeof(hints)); hints.ai_family = PF_UNSPEC; hints.ai_socktype = SOCK_STREAM; hints.ai_protocol = IPPROTO_TCP; #ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; #endif #ifdef AI_ADDRCONFIG again: #endif n = getaddrinfo(hostname, servname, \u0026amp;hints, \u0026amp;res0); if (n != 0) { #ifdef AI_ADDRCONFIG if (n == EAI_BADFLAGS \u0026amp;\u0026amp; hints.ai_flags \u0026amp; AI_ADDRCONFIG) { hints.ai_flags \u0026amp;= ~AI_ADDRCONFIG; goto again; } #endif fprintf(stderr, \u0026#34;getaddrinfo: %s\\n\u0026#34;, gai_strerror(n)); exit(1); } gettimeofday(\u0026amp;then, NULL); fd = connect_to_host(res0); gettimeofday(\u0026amp;now, NULL); freeaddrinfo(res0); now.tv_sec -= then.tv_sec; now.tv_usec -= then.tv_usec; while (now.tv_sec \u0026gt; 0) { now.tv_usec += 1000000; now.tv_sec -= 1; } fprintf(stderr, \u0026#34;connect_to_host(%s) -\u0026gt; %d in %d ms\\n\u0026#34;, hostname, fd, (int)now.tv_usec/1000); close(fd); exit(0); } #endif Poll based sample code /* * Copyright (C) 2011 Internet Systems Consortium, Inc. (\u0026#34;ISC\u0026#34;) * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED \u0026#34;AS IS\u0026#34; AND ISC DISCLAIMS ALL WARRANTIES WITH * REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY * AND FITNESS. 
IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, * INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE * OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR * PERFORMANCE OF THIS SOFTWARE. */ /* * Initial timeout between connection attempts. The smaller this is the * more embryonic connection attempts that will be made. On each subsequent * connection attempt the timeout will be halved leading to all connection * attempts being initiated within 2 * TIMEOUT ms. * * 100 ms will let most intra continent connections succeed without a *\tembryonic connection. * 500 ms well let most intercontinental connections succeed without a * embryonic connection. */ #define TIMEOUT 500\t/* 500 ms */ #define TESTING 1 #include \u0026lt;sys/types.h\u0026gt; #include \u0026lt;sys/socket.h\u0026gt; #include \u0026lt;sys/poll.h\u0026gt; #include \u0026lt;sys/time.h\u0026gt; #include \u0026lt;netinet/in.h\u0026gt; #include \u0026lt;assert.h\u0026gt; #include \u0026lt;errno.h\u0026gt; #include \u0026lt;fcntl.h\u0026gt; #include \u0026lt;netdb.h\u0026gt; #include \u0026lt;stdarg.h\u0026gt; #include \u0026lt;stdbool.h\u0026gt; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026lt;string.h\u0026gt; #include \u0026lt;unistd.h\u0026gt; int connect_to_host(struct addrinfo *res0) { struct addrinfo *res; int fd = -1, n, i, j, flags, count; struct pollfd *fds; int timeout = TIMEOUT; /* * Work out how many possible descriptors we could use. 
*/ for (res = res0, count = 0; res; res = res-\u0026gt;ai_next) count++; fds = calloc(count, sizeof(*fds)); if (fds == NULL) { perror(\u0026#34;calloc\u0026#34;); goto cleanup; } for (res = res0, i = 0, count = 0; res; res = res-\u0026gt;ai_next) { fd = socket(res-\u0026gt;ai_family, res-\u0026gt;ai_socktype, res-\u0026gt;ai_protocol); if (fd == -1) { /* * If AI_ADDRCONFIG is not supported we will get * EAFNOSUPPORT returned. Behave as if the address * was not there. */ if (errno != EAFNOSUPPORT) perror(\u0026#34;socket\u0026#34;); else if (res-\u0026gt;ai_next != NULL) continue; } else if ((flags = fcntl(fd, F_GETFL)) == -1) { perror(\u0026#34;fcntl\u0026#34;); close(fd); } else if (fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1) { perror(\u0026#34;fcntl\u0026#34;); close(fd); } else if (connect(fd, res-\u0026gt;ai_addr, res-\u0026gt;ai_addrlen) == -1) { if (errno != EINPROGRESS) { perror(\u0026#34;connect\u0026#34;); close(fd); } else { /* * Record the information for this descriptor. */ fds[i].fd = fd; fds[i].events = POLLERR | POLLHUP | POLLIN | POLLOUT; count++; i++; } } else { /* * We connected without blocking. */ goto done; } if (count == 0) continue; do { if (res-\u0026gt;ai_next == NULL) timeout = -1; n = poll(fds, i, timeout); if (n == 0) { timeout \u0026gt;\u0026gt;= 1; break; } if (n \u0026lt; 0) { if (errno == EAGAIN || errno == EINTR) continue; perror(\u0026#34;poll\u0026#34;); fd = -1; goto done; } for (j = 0; j \u0026lt; i; j++) { if (fds[j].fd == -1 || fds[j].events == 0 || fds[j].revents == 0) continue; fd = fds[j].fd; if (fds[j].revents \u0026amp; POLLHUP) { close(fd); fds[j].fd = -1; fds[j].events = 0; count--; continue; } /* Connect succeeded. */ goto done; } } while (timeout == -1 \u0026amp;\u0026amp; count != 0); } /* We failed to connect. */ fd = -1; done: /* Close all other descriptors we have created. 
*/ for (j = 0; j \u0026lt; i; j++) if (fds[j].fd != fd \u0026amp;\u0026amp; fds[j].fd != -1) { close(fds[j].fd); } if (fd != -1) { /* Restore default blocking behaviour. */ if ((flags = fcntl(fd, F_GETFL)) != -1) { flags \u0026amp;= ~O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) == -1) perror(\u0026#34;fcntl\u0026#34;); } else perror(\u0026#34;fcntl\u0026#34;); } cleanup: /* Free everything. */ if (fds != NULL) free(fds); return (fd); } #if TESTING int main(int argc, char **argv) { int fd, n; struct timeval then, now; struct addrinfo hints, *res0; const char *hostname, *servname; hostname = \u0026#34;localhost\u0026#34;; if (argv[1]) hostname = argv[1]; servname = \u0026#34;http\u0026#34;; /* * Not all getaddrinfo() implementations support AI_ADDRCONFIG * even if it is defined. Retry without it on EAI_BADFLAGS. */ memset(\u0026amp;hints, 0, sizeof(hints)); hints.ai_family = PF_UNSPEC; hints.ai_socktype = SOCK_STREAM; hints.ai_protocol = IPPROTO_TCP; #ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; #endif #ifdef AI_ADDRCONFIG again: #endif n = getaddrinfo(hostname, servname, \u0026amp;hints, \u0026amp;res0); if (n != 0) { #ifdef AI_ADDRCONFIG if (n == EAI_BADFLAGS \u0026amp;\u0026amp; hints.ai_flags \u0026amp; AI_ADDRCONFIG) { hints.ai_flags \u0026amp;= ~AI_ADDRCONFIG; goto again; } #endif fprintf(stderr, \u0026#34;getaddrinfo: %s\\n\u0026#34;, gai_strerror(n)); exit(1); } gettimeofday(\u0026amp;then, NULL); fd = connect_to_host(res0); gettimeofday(\u0026amp;now, NULL); freeaddrinfo(res0); now.tv_sec -= then.tv_sec; now.tv_usec -= then.tv_usec; while (now.tv_sec \u0026gt; 0) { now.tv_usec += 1000000; now.tv_sec -= 1; } fprintf(stderr, \u0026#34;connect_to_host(%s) -\u0026gt; %d in %d ms\\n\u0026#34;, hostname, fd, (int)now.tv_usec/1000); close(fd); exit(0); } #endif Select based sample code /* * Copyright (C) 2011 Internet Systems Consortium, Inc. 
(\u0026#34;ISC\u0026#34;) * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED \u0026#34;AS IS\u0026#34; AND ISC DISCLAIMS ALL WARRANTIES WITH * REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY * AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, * INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM * LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE * OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR * PERFORMANCE OF THIS SOFTWARE. */ /* * Initial timeout between connection attempts. The smaller this is the * more embryonic connection attempts that will be made. On each subsequent * connection attempt the timeout will be halved leading to all connection * attempts being initiated within 2 * TIMEOUT ms. * * 100 ms will let most intra continent connections succeed without a *\tembryonic connection. * 500 ms well let most intercontinental connections succeed without a * embryonic connection. */ #define TIMEOUT 500\t/* 500 ms */ #define TESTING 1 #if TIMEOUT \u0026gt; 999 #define TIMEOUT 999\t/* select() doesn\u0026#39;t like tv_usec \u0026gt; 999999. 
*/ #endif #include \u0026lt;sys/types.h\u0026gt; #include \u0026lt;sys/socket.h\u0026gt; #include \u0026lt;sys/select.h\u0026gt; #include \u0026lt;sys/time.h\u0026gt; #include \u0026lt;netinet/in.h\u0026gt; #include \u0026lt;assert.h\u0026gt; #include \u0026lt;errno.h\u0026gt; #include \u0026lt;fcntl.h\u0026gt; #include \u0026lt;netdb.h\u0026gt; #include \u0026lt;stdarg.h\u0026gt; #include \u0026lt;stdio.h\u0026gt; #include \u0026lt;stdlib.h\u0026gt; #include \u0026lt;string.h\u0026gt; #include \u0026lt;unistd.h\u0026gt; int connect_to_host(struct addrinfo *res0) { struct addrinfo *res; int fd = -1, n, i, j, flags, count, max = -1, *fds; struct timeval *timeout, timeout0 = { 0, TIMEOUT * 1000}; fd_set fdset, wrset; /* * Work out how many possible descriptors we could use. */ for (res = res0, count = 0; res; res = res-\u0026gt;ai_next) count++; fds = calloc(count, sizeof(*fds)); if (fds == NULL) { perror(\u0026#34;calloc\u0026#34;); goto cleanup; } FD_ZERO(\u0026amp;fdset); for (res = res0, i = 0, count = 0; res; res = res-\u0026gt;ai_next) { fd = socket(res-\u0026gt;ai_family, res-\u0026gt;ai_socktype, res-\u0026gt;ai_protocol); if (fd == -1) { /* * If AI_ADDRCONFIG is not supported we will get * EAFNOSUPPORT returned. Behave as if the address * was not there. */ if (errno != EAFNOSUPPORT) perror(\u0026#34;socket\u0026#34;); else if (res-\u0026gt;ai_next != NULL) continue; } else if (fd \u0026gt;= FD_SETSIZE) { close(fd); } else if ((flags = fcntl(fd, F_GETFL)) == -1) { perror(\u0026#34;fcntl\u0026#34;); close(fd); } else if (fcntl(fd, F_SETFL, flags | O_NONBLOCK) == -1) { perror(\u0026#34;fcntl\u0026#34;); close(fd); } else if (connect(fd, res-\u0026gt;ai_addr, res-\u0026gt;ai_addrlen) == -1) { if (errno != EINPROGRESS) { perror(\u0026#34;connect\u0026#34;); close(fd); } else { /* * Record the information for this descriptor. 
*/ fds[i] = fd; FD_SET(fd, \u0026amp;fdset); if (max == -1 || fd \u0026gt; max) max = fd; count++; i++; } } else { /* * We connected without blocking. */ goto done; } if (count == 0) continue; assert(max != -1); do { if (res-\u0026gt;ai_next != NULL) timeout = \u0026amp;timeout0; else timeout = NULL; /* The write bit is set on both success and failure. */ wrset = fdset; n = select(max + 1, NULL, \u0026amp;wrset, NULL, timeout); if (n == 0) { timeout0.tv_usec \u0026gt;\u0026gt;= 1; break; } if (n \u0026lt; 0) { if (errno == EAGAIN || errno == EINTR) continue; perror(\u0026#34;select\u0026#34;); fd = -1; goto done; } for (fd = 0; fd \u0026lt;= max; fd++) { if (FD_ISSET(fd, \u0026amp;wrset)) { socklen_t len; int err; for (j = 0; j \u0026lt; i; j++) if (fds[j] == fd) break; assert(j \u0026lt; i); /* * Test to see if the connect * succeeded. */ len = sizeof(err); n = getsockopt(fd, SOL_SOCKET, SO_ERROR, \u0026amp;err, \u0026amp;len); if (n != 0 || err != 0) { close(fd); FD_CLR(fd, \u0026amp;fdset); fds[j] = -1; count--; continue; } /* Connect succeeded. */ goto done; } } } while (timeout == NULL \u0026amp;\u0026amp; count != 0); } /* We failed to connect. */ fd = -1; done: /* Close all other descriptors we have created. */ for (j = 0; j \u0026lt; i; j++) if (fds[j] != fd \u0026amp;\u0026amp; fds[j] != -1) { close(fds[j]); } if (fd != -1) { /* Restore default blocking behaviour. */ if ((flags = fcntl(fd, F_GETFL)) != -1) { flags \u0026amp;= ~O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) == -1) perror(\u0026#34;fcntl\u0026#34;); } else perror(\u0026#34;fcntl\u0026#34;); } cleanup: /* Free everything. 
*/ if (fds) free(fds); return (fd); } #if TESTING int main(int argc, char **argv) { int fd, n; struct timeval then, now; struct addrinfo hints, *res0; const char *hostname, *servname; hostname = \u0026#34;localhost\u0026#34;; if (argv[1]) hostname = argv[1]; servname = \u0026#34;http\u0026#34;; /* * Not all getaddrinfo() implementations support AI_ADDRCONFIG * even if it is defined. Retry without it on EAI_BADFLAGS. */ memset(\u0026amp;hints, 0, sizeof(hints)); hints.ai_family = PF_UNSPEC; hints.ai_socktype = SOCK_STREAM; hints.ai_protocol = IPPROTO_TCP; #ifdef AI_ADDRCONFIG hints.ai_flags = AI_ADDRCONFIG; #endif #ifdef AI_ADDRCONFIG again: #endif n = getaddrinfo(hostname, servname, \u0026amp;hints, \u0026amp;res0); if (n != 0) { #ifdef AI_ADDRCONFIG if (n == EAI_BADFLAGS \u0026amp;\u0026amp; hints.ai_flags \u0026amp; AI_ADDRCONFIG) { hints.ai_flags \u0026amp;= ~AI_ADDRCONFIG; goto again; } #endif fprintf(stderr, \u0026#34;getaddrinfo: %s\\n\u0026#34;, gai_strerror(n)); exit(1); } gettimeofday(\u0026amp;then, NULL); fd = connect_to_host(res0); gettimeofday(\u0026amp;now, NULL); freeaddrinfo(res0); now.tv_sec -= then.tv_sec; now.tv_usec -= then.tv_usec; while (now.tv_sec \u0026gt; 0) { now.tv_usec += 1000000; now.tv_sec -= 1; } fprintf(stderr, \u0026#34;connect_to_host(%s) -\u0026gt; %d in %d ms\\n\u0026#34;, hostname, fd, (int)now.tv_usec/1000); close(fd); exit(0); } #endif ","permalink":"/blogs/2011-01-19-multi-homed-server-over-tcp/","tags":["BIND"],"title":"How to Connect to a Multi-Homed Server Over TCP"},{"categories":["F-Root"],"contents":"ISC uses an unusual routing configuration for the F-Root name server. While the configuration is relatively easy to understand, it’s hard to deduce by looking at the routing tables. We’ll explain it here!\nThe network 192.5.4.0/23 is used for F-Root. The reasons for using this block are historical and unimportant, but the fact that it is a /23 is very important. 
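The operational leverage of the /23 comes from longest-prefix matching: a /23 contains two /24s, so a more-specific /24 route can be announced inside it, and any router holding both routes will prefer the /24. A minimal sketch using Python's ipaddress module (illustrative only; 192.5.5.241 is F-Root's well-known service address, an assumption not stated in the text):

```python
# Illustrative sketch (not ISC's routing code): why a /23 matters.
# 192.5.5.241 (F-Root's well-known address) is an assumption here.
import ipaddress

covering = ipaddress.ip_network("192.5.4.0/23")
more_specific = ipaddress.ip_network("192.5.5.0/24")
f_root = ipaddress.ip_address("192.5.5.241")

# The /23 covers the /24, and F-Root's address lies in both.
assert more_specific.subnet_of(covering)
assert f_root in covering and f_root in more_specific

# Longest-prefix match: a router holding both routes prefers the /24.
routes = [covering, more_specific]
best = max((n for n in routes if f_root in n), key=lambda n: n.prefixlen)
print(best)  # 192.5.5.0/24
```

This is exactly why a locally announced /24 "short-circuits" traffic away from the globally routed /23.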
Looking in the global routing table, you’ll find 192.5.4.0/23 routed worldwide; ISC has obtained multiple transit providers for this network to provide excellent access to F-Root.\nLooking at 192.5.4.0/23, F-Root appears to be a single global node for the entire world. In fact, it is not a single node: internal to our network, we anycast the F-Root “global nodes” that handle the traffic destined to this prefix.\nISC went a step further, though, exercising an ability created by the fact that the prefix is a /23. In over 50 locations worldwide, ISC has enabled “local nodes”. Typically an F-Root instance is at a local Internet exchange, although sometimes at a local PTT or university. These local nodes are designed to serve the local community, providing lower-latency, high-reliability access to local users all around the world.\nISC could announce the /23 at these locations in a typical anycast configuration. ISC opted for a more interesting configuration, though: announcing at these locations 192.5.5.0/24 with the BGP well-known community NO_EXPORT set. The intention is that those local to these local nodes will receive the more-specific /24 and route their traffic to the local node. Why set NO_EXPORT? It would be bad to attract traffic from halfway around the world to a local node! Some of these nodes are in far-flung parts of the world, some even behind slow satellite links. They are there to serve the local community, not the wider Internet (which is served by the global nodes).\nOne of the most common questions we receive is from ISPs peering with a local node. They want to drop the NO_EXPORT and send the prefixes on to their customers. This is absolutely not necessary. The ISP has 192.5.4.0/23 from the global nodes and is sending that to its customers. Customers will send traffic for F-Root into the ISP network automatically; having the 192.5.5.0/24 route inside the ISP network will then “short-circuit” the traffic to the best local node.\nWhat about IPv6? 
The exact same configuration applies, only with different networks. Replace 192.5.4.0/23 with 2001:500:2E::/47, and replace 192.5.5.0/24 with 2001:500:2F::/48 and you have the IPv6 configuration! ISC has been working to get all of the local nodes IPv6 enabled, and has implemented this on over two-thirds of them already.\nIf you want to peer with an ISC local node, visit our Network Peering page.\n","permalink":"/blogs/f-root-routing-how-does-it-work/","tags":["F-Root","routing","anycast"],"title":"F-Root Routing: How Does It Work?"},{"categories":["BIND","DNS"],"contents":"DNSSEC is coming. Is your organization ready? The DNS community is buzzing with activity around the implementation of the DNS Security Extension, DNSSEC. In simple terms, DNSSEC provides a \u0026ldquo;chain of trust\u0026rdquo; within the DNS hierarchy and the authentication of DNS responses. Once deployed across the DNS, DNSSEC will render the infamous man-in-the-middle attack a thing of the past.\nBut DNSSEC adds many new twists to running a DNS service, both for authoritative and recursive customer-facing servers. At ISC, we are frequently asked what \u0026ldquo;being ready for DNSSEC\u0026rdquo; really means. Here are some things that you can expect when deploying DNSSEC.\nDNSSEC Changes, In General Regardless of the type of server (authoritative or recursive), many changes to the operational environment will come about with the adoption of DNSSEC.\nLarger UDP packets DNSSEC adds additional data to each answer that a server returns. These larger answers can sometimes exceed the packet sizes expected by some software and hardware. These larger packets may be returned even when the client is not using DNSSEC.\nIncreased TCP Usage Larger UDP packets may trigger a more frequent fall-back to TCP. 
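The fall-back is signaled by the TC (truncation) bit in the DNS header, defined in RFC 1035: when a UDP response will not fit, the server sets TC, and the client retries the query over TCP. A minimal sketch of the flag arithmetic (illustrative only; not code from this article):

```python
# Illustrative sketch of the DNS header TC (truncation) flag (RFC 1035).
import struct

QR, TC, RD = 0x8000, 0x0200, 0x0100  # DNS header flag bits

def build_flags(truncated: bool) -> int:
    """Build a response's flags word, optionally marking it truncated."""
    flags = QR | RD
    if truncated:
        flags |= TC
    return flags

def needs_tcp_retry(header: bytes) -> bool:
    """A client checks TC on a UDP reply to decide whether to retry over TCP."""
    _ident, flags = struct.unpack("!HH", header[:4])
    return bool(flags & TC)

# 12-byte header: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT.
udp_reply = struct.pack("!HHHHHH", 0x1234, build_flags(True), 1, 0, 0, 0)
print(needs_tcp_retry(udp_reply))  # True
```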
TCP is less efficient than UDP for DNS because it causes more network packets to be transferred for a single query and places additional memory requirements on both client and server.\nIncreased Memory Requirements A DNSSEC-signed zone is anywhere from 4 to 14 times the size of the original zone when using RSA keys and signatures; the increase depends on the key size. Memory requirements will also increase when a new key is being \u0026ldquo;rolled.\u0026rdquo; This has a particularly heavy impact on the DNS infrastructure of larger TLD registries and others who serve very large zones.\nFirewalls, Load Balancing, and DNS Firewalls can cause no end of problems for the larger responses DNSSEC generates. Load-balancer hardware may block larger packets or packets containing new DNSSEC record types. Both of these network components may decide an incoming packet is not acceptable and block it or change it in ways that break the DNSSEC protocol.\nAuthoritative Servers This list is for people who run servers that serve DNS zones and who wish to sign them.\nAdditional Procedures When Changing Records DNS used to be simpler: zones only changed when new records were added or removed. This is no longer the case with DNSSEC. A signed zone is quite simple in concept: keys are added to the zone and used to generate signatures over the records in the zone. The complexities are in the details.\nA DNSSEC zone has an imposed record order. This order is a necessary component of the means by which DNSSEC signals that a name does not exist. The ordering is handled automatically by the signing tools. However, if a new record were added to a signed zone and the zone were not re-signed to add the signatures and this ordering, that new record would not be accepted by a validating client or resolver.\nSignature Expiration Typically, signatures are valid for a short time, such as one month. 
These signatures need to be refreshed by re-signing the records (or the entire zone) periodically. If a signature is allowed to expire, clients and resolvers cannot validate that data and will treat it as a failure. It is critical that zone signatures be maintained.\nKey Management Just like maintaining keys to an office building or your house, keys are critical to the security of DNSSEC. Without sufficiently strong keys and policies to prevent them from being compromised, DNSSEC adds no security.\nThe length of a key is one measure of its strength. Choosing a too-long key will cause more work on each resolver and slow the signing process. It will also generate larger signatures, increasing load in other areas. Using a too-short key will decrease security. Because DNSSEC uses two keys, a smaller, 1024-bit key may be sufficient for the \u0026ldquo;zone signing key,\u0026rdquo; which can change much more rapidly and signs the majority of the zone data. A larger value may be desirable for the \u0026ldquo;key signing key,\u0026rdquo; perhaps 2048-bit. Please choose key lengths with caution and understand the implications carefully.\nKey compromise might occur when an employee leaves a company and chooses to take a copy of the private parts of the DNSSEC keys with them, or through an attacker gaining access to a server, or because you generated weak keys and attackers were able to guess them.\nCareful Selection of TTL Each record in DNS has a \u0026ldquo;time to live\u0026rdquo; (TTL) value associated with it. This value tells a cache how long it may retain its copy of the given record. In general, the TTL on one record has no effect on other records in the zone. DNSSEC still uses a TTL, but there are subtle changes in how the TTLs of various types of records interact.
In some cases, the TTL on the DNSKEY record may decrease the effective TTL on other records in the cache, for instance.\nKeys and their signatures are among the largest responses a server will generate. It is desirable to keep a large TTL value on keys, and indeed on many types of records. However, a larger TTL can cause problems when performing certain DNSSEC procedures such as rolling a new key. It may also increase recovery time when something bad happens, such as letting signatures expire.\nBecause each zone is different and its usage patterns vary, we do not suggest a single one-size-fits-all TTL value. However, we do not suggest longer than a week nor shorter than an hour for most TTL values which are not expected to change rapidly. A shorter TTL assists with agility for key rollover but puts more strain on the servers and network.\nEmergency Procedures To effectively deal with any unforeseen disasters, emergency procedures must be developed to re-generate keys and re-sign the zone in a worst-case scenario. For large zone files, this can further impact the requirements for memory, bandwidth, and performance.\nRecursive Servers This list is for people who run servers which serve customers, or other recursive clients.\nNew Failure Types Queries that used to succeed may suddenly fail in new and creative ways. Previously, the most common reason for a query not working was a simple network reachability problem; DNSSEC increases the varieties of network failure that can occur. Queries can also fail for other reasons such as signature failures caused by poorly maintained DNSSEC-signed zones.\nNXDOMAIN Rewriting Some organizations attempt to monetize failed DNS lookups, or attempt to be helpful in some way by providing an automatic search for possible terms when a user types an invalid address in a browser. This will break DNSSEC for the clients of this resolver if these clients are also performing DNSSEC validation.
Allowing customers to opt-in or opt-out of any redirection service is required for end-to-end DNSSEC validation.\nWhat to ask your vendors Firewalls, load balancers, and other middleware boxes The first and most important question to ask them is: Will this software or hardware interfere with DNSSEC? If they hedge or do not know, there may be danger ahead. Are UDP packets larger than 512 bytes handled correctly? Will the new record types (DNSKEY, RRSIG, NSEC, NSEC3, etc.) be handled properly? What changes will be made to a packet by this software or hardware? If the records are changed or filtered in any way, chances are things will break. DNS service providers If you outsource any part of your DNS infrastructure, you may want to ask some questions about their DNSSEC plans.\nHow are keys protected? How well do they understand DNSSEC? Who can you call when things are broken, and is it likely to be handled immediately or will it require waiting for \"the DNSSEC expert\" to wake up? Geo-DNS and other global methods Many companies use some form of load balancing based on the perceived location of the client. This generally involves rewriting the IP addresses or names returned. These services can still be used with DNSSEC by splitting the records out into their own zone and delegating that zone to the load balancing system.\nEach record variant could also be signed, but this may be much harder to do in practice due to lack of production and experimental knowledge in this area. For now, using DNSSEC with any form of a server which generates answers on the fly or based on client location should be avoided.\nClosing DNSSEC adds many new complications to DNS. However, with careful planning, its usefulness will outweigh them.\nIf you need help ISC offers expert support and consulting services to help you migrate to DNSSEC. ISC has training for all things DNS, and a class specifically for DNSSEC.
If outsourcing service of zones is desired, ISC's SNS is a commercial-grade DNSSEC DNS service. ISC is the author of BIND 9, which as of 9.7 will perform automatic zone maintenance and re-signing. BIND 9.7 also supports automatic maintenance of trust anchors for early adopters wishing to gain experience with a validating resolver. If maintaining a pile of trust anchors is undesirable, ISC's DLV service can assist. ","permalink":"/blogs/dnssec-readiness/","tags":["BIND","DNS","DNSSEC"],"title":"DNSSEC Readiness"},{"categories":["BIND","DNS"],"contents":"The press seems to love stories of doom and gloom. And for almost as long as the Internet has been around, there have been dire predictions of some resource exhaustion, disaster, or security flaw that will destroy the Internet. And who is the villain in this week\u0026rsquo;s piece? DNSSEC and the signing of the root zone.\nWhile I love a good story as much as the next person, it seems time to actually throw a few facts on the fire.\nWhat is DNSSEC and what is \u0026ldquo;signing\u0026rdquo;? The Domain Name System Security Extensions (DNSSEC) is a way of ensuring that when your name server queries the authoritative servers for example.com, you have a high degree of certainty that the answers you get back actually came from the example.com servers and haven\u0026rsquo;t been altered in transit.\nBased on the trust anchors that you configure for your name server, when your name server validates a DNS response, it will go link by link in the chain. Each link \u0026ldquo;signs\u0026rdquo; its piece by using public key cryptography to create a digital signature. Your resolver can use that signature to validate that the server you think should sign a link has actually signed it and that what you receive was correctly signed.\nFor the DNS name \u0026ldquo;host.example.com.\u0026rdquo;, you would go to the root and get the signature that will let you validate \u0026ldquo;com\u0026rdquo;.
From \u0026ldquo;com\u0026rdquo;, you would get the signature that will let you validate \u0026ldquo;example.com\u0026rdquo;, and so on. In order for this to work, the root servers sign all of their RRsets, including the RRset for \u0026ldquo;com\u0026rdquo;. The \u0026ldquo;com\u0026rdquo; servers would sign all their RRsets, including the ones for \u0026ldquo;example.com\u0026rdquo;.\nWhat really happened on May 5? The root server operators didn\u0026rsquo;t want to just turn on DNSSEC, start signing the root zone, and hope nothing broke. They came up with a detailed, phased plan for deployment. The first stage was to hand out signature records that were not validatable but would test if handing out DNSSEC signatures would break anything.\nOn 25 Jan 2010, the first of the 13 root servers started handing out a Deliberately Unvalidatable Root Zone (DURZ). This was done in phases over several months so that if something did break, not all root servers would be serving the new data and name servers that couldn\u0026rsquo;t accept the DURZ would still have access to root servers.\nWhat happened on 5 May 2010 was that the last of the 13 root servers started serving the DURZ. And if you\u0026rsquo;re reading this, I think you can assume that the Internet is still working.\nIt is important to keep in mind that while the roots are \u0026ldquo;signed\u0026rdquo;, they are still not serving a validatable root zone (i.e. you can\u0026rsquo;t yet use the roots as a trust anchor). The current schedule is that a real signed root zone will start being served in July 2010. This will make the roots usable as a trust anchor.\nWhile this will be a real milestone, the Top Level Domains (TLDs), such as .NET, .COM, etc. will also all need to be signed before you can have just the root as a trust anchor. Until then, additional trust anchors, such as dlv.isc.org, will still need to be used.\nWhat\u0026rsquo;s the real problem? The underlying issue is DNS response size.
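To give a rough sense of scale (our own back-of-the-envelope figures, not from the original article), the RRSIG RDATA defined in RFC 4034 has 18 bytes of fixed fields, plus the signer's wire-format name, plus the raw signature, so each signature inflates a response substantially:

```python
def rrsig_rdata_size(signer_name_wire_len: int, signature_len: int) -> int:
    """Approximate RRSIG RDATA size per RFC 4034: 18 fixed bytes
    (type covered 2, algorithm 1, labels 1, original TTL 4,
    expiration 4, inception 4, key tag 2) plus the signer's
    wire-format name plus the signature itself."""
    return 18 + signer_name_wire_len + signature_len

# An RSA-1024 signature is 128 bytes; the signer name "example.com."
# is 13 bytes on the wire (\x07example\x03com\x00).
print(rrsig_rdata_size(13, 128))  # 159
```

That extra 150+ bytes per signed RRset is what pushes many answers past the classic 512-byte UDP limit discussed below.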
\u0026ldquo;Conventional wisdom\u0026rdquo; for years for configuring firewalls, proxies, and load balancers was that UDP packets were 512 bytes or smaller and that DNS only used TCP for zone transfers, not queries. Sadly, both of these are wrong. For quite a while, DNS responses did fit in 512 byte packets. However, as the Internet grew and DNS was used for more and more things not originally envisioned by its designers, packets did get bigger.\nDNS did have a backup plan for larger packets: the truncation (TC) bit. If a nameserver had a response where all the required records in an RRset would not fit into a 512 byte packet, the nameserver would send back an incomplete response with the TC bit set. That was supposed to tell the querying server that the response was truncated and that it should retry the query using TCP, which doesn\u0026rsquo;t have the 512 byte restriction. Too bad various middleware boxes were configured to not allow DNS over TCP\u0026hellip;\nThe IETF developed the Extension Mechanisms for DNS (EDNS0) to allow querying nameservers that could accept larger than 512 byte UDP packets to tell the authoritative nameserver how big a packet they could accept. And life should have been good. Except for folks that blocked larger than 512 byte packets\u0026hellip; And all sorts of intervening network boxes and middleware boxes did all sorts of broken things with fragmented UDP packets, broke EDNS0, filtered DNS queries over TCP, etc.\nThe big goal of this first phase of the signed root deployment is to let folks find all these broken middleware boxes so that they can be fixed before a signed and validatable root zone file is deployed.\nThese broken boxes really should be fixed and the vendors have no excuse. EDNS0 \u0026amp; queries over TCP have been around for more than ten years. IPv4 fragmentation has been around for decades.
This is not DNSSEC-specific; DNS needs this fixed even if you are not planning on deploying DNSSEC.\nWill my DNS stop working? Probably not. OARC and the root operators have been monitoring this rollout carefully. By watching increases in TCP query traffic and packet sniffing, they have determined that the number of nameservers behind truly broken middleware boxes is quite small. Those boxes should be fixed, since this breaks any large DNS response; this is not a problem particular to DNSSEC.\nDNS is very robust. You may see UDP timeouts, fallback to smaller UDP packet sizes, or fallback to TCP, but it takes a lot of brokenness to truly make DNS unusable.\nNow that the root is signed, how long do I have to sign all my zones? As long as it takes you to determine you have a business case for doing DNSSEC, determine your trust anchor policy and your signing policy, and do a phased rollout plan of your own.\nThere is no requirement for you to sign your own zones just because the root is signed. You should sign your zones when you have a good reason to sign them.\nI would like to start using DNSSEC validation. Do I have to sign my zones? You do not have to sign your own zones in order to start doing DNSSEC validation. Odds are that you\u0026rsquo;ll want to start signing your zones at some point. But all you need to do for DNSSEC validation is to choose a trust anchor, enable DNSSEC validation in your nameserver configuration, and put the trust anchor in your nameserver configuration.\nWhat if I don\u0026rsquo;t want to do DNSSEC? Will DNS still work for me? Yes. One of the design requirements for DNSSEC was that it shouldn\u0026rsquo;t break DNS for people not using it. If you never sign your zones and never turn on DNSSEC validation, even if the rest of the Internet does, you will never notice. Your DNS will continue to work as it has in the past. The only caveat is that if you are behind one of the really broken middleware boxes, you should get it fixed.
But you\u0026rsquo;ll need that for regular DNS responses larger than 512 bytes anyway.\nWhere do I go from here? Listed below are some resources for testing and further reading. ISC can also assist you with your DNSSEC deployment through training and consulting.\nTest your resolver's network path's ability to deal with EDNS0 and large UDP packets RFC2671: Extension Mechanisms for DNS (EDNS0) DNSSEC-related RFCs ","permalink":"/blogs/imminent-death-of-internet-predicted-film-at-11/","tags":["BIND","DNS","DNSSEC"],"title":"Imminent Death of Internet Predicted. Film at 11."},{"categories":["BIND","DNS"],"contents":"In the fall of 2009, the organizations responsible for generating the root zone, ICANN, Verisign, and the US Department of Commerce, announced that they had come to an agreement on how to sign the root zone with DNSSEC (DNS Security Extensions). A website has been created by ICANN and Verisign to provide information about the change and a rollout timeline.\nFirst, a signed root zone aka \u0026ldquo;DURZ\u0026rdquo; (Deliberately Unvalidatable Root Zone) - which cannot be used for validation purposes - will be deployed across all of the root servers in a phased rollout from January-May 2010. If all goes well, the fully validated root zone will be put into production on the 1st of July, 2010.\nAs one of the twelve Root Server Operators, ISC has created this blog post to answer some common questions regarding a signed root zone and what the community can do to prepare for the change. In addition, ISC will be describing what we are doing to prepare F.ROOT-SERVERS.NET to handle a signed root.\nWhy do we need to sign the root zone? DNSSEC has been developed by the Internet Engineering Task Force (IETF) so that a digital signature (RRSIG) can be applied to a DNS Resource Record Set (RRset), allowing a client to verify that these records are authentic.
Since DNS is a hierarchical naming system, a signed root zone means that a DNSSEC-aware client can look up a domain in, say, the .ORG namespace (which is one of many TLDs that have already signed their zones with DNSSEC) and can follow a completely signed (and verified) delegation path.\nWhat are some possible side effects of a signed root zone? The big change is that DNS responses from the root servers for \u0026lsquo;.\u0026rsquo; will become larger, as they will contain the answer in the form of an RRset and its signature (RRSIG).\nBefore DNSSEC, most DNS packets were over UDP and smaller than 512 bytes, which was enshrined in the early DNS-related RFCs. Since most DNS responses with signed RRsets (containing the paired RRset and RRSIG) will exceed that 512 byte limit, the IETF developed the EDNS0 extension (RFC 2671) to allow a client to request a response larger than 512 bytes (up to 4096 bytes) over UDP via IP fragments. EDNS0 is now widely supported by many device and appliance vendors (as well as DNS server applications like BIND).\nHowever, there are many devices \u0026amp; appliances in the wild that are still configured by default to only accept DNS packets smaller than 512 bytes or that don\u0026rsquo;t allow for IP fragments. In some cases the clients may try a smaller buffer size until they can get the response through; in other cases, clients would then just fall back to TCP.\nIs there any way I can test to see if my nameserver supports EDNS0? You can use the Reply Size Test Server developed by DNS-OARC to see whether your resolver can accept EDNS0 packets and whether your firewall (if there is one) is accepting IP fragments.\nNote that if the results are smaller than you expect and if you are running a modern DNS software package (like BIND 9.5.0 or later), then the problem may lie behind an intermediary firewall, NAT device, or router between your name server and the test server.
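For readers curious what an EDNS0-capable query actually looks like on the wire, here is a minimal sketch (the helper name is ours, not part of the original post) that appends the OPT pseudo-record from RFC 2671, whose CLASS field advertises a 4096-byte UDP buffer, to a query for the root NS set:

```python
import struct

def build_root_ns_query(edns_bufsize: int = 4096) -> bytes:
    """Build a wire-format DNS query for ./NS with an EDNS0 OPT record.

    The OPT pseudo-record (RFC 2671) lives in the additional section;
    its CLASS field carries the requester's maximum UDP payload size.
    """
    # Header: id, flags (RD set), 1 question, 0 answers, 0 authority, 1 additional
    header = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question: root name (single zero byte), QTYPE=NS (2), QCLASS=IN (1)
    question = b"\x00" + struct.pack("!2H", 2, 1)
    # OPT: root name, TYPE=41, CLASS=bufsize, TTL=0 (ext-rcode/flags), RDLEN=0
    opt = b"\x00" + struct.pack("!HHIH", 41, edns_bufsize, 0, 0)
    return header + question + opt

query = build_root_ns_query()
print(len(query))  # 28
```

Sending this 28-byte query to a root server over UDP is, in essence, what the DNS-OARC test exercises: whether the larger reply the OPT record invites actually makes it back through your network path.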
Please fix any issues you see now or you will likely experience degraded performance from the root servers once the signed root is fully rolled out.\nWhat is ISC doing to prepare F.ROOT-SERVERS.NET (F-Root) to support a signed root? ISC has been a long-standing supporter of DNSSEC, including extensive contributions to the protocol by our engineers, and has been operating F-Root in a DNSSEC-ready state for several years now by running a DNSSEC-aware DNS server, etc.\nF-Root is scheduled to load the signed root zone (\u0026ldquo;DURZ\u0026rdquo;) the week of the 12th of April, 2010.\nBy then ISC will have standardized all F-Root servers to be running BIND 9.6.2, which is the first BIND release to have full support for the SHA-2 DNSSEC algorithm which will be used to sign the root zone. Note that this step is not strictly needed since the root servers are serving the content and not doing validation.\nISC will also be adjusting its monitoring across F-Root so that we are now tracking both UDP and TCP queries, as we expect an increase in TCP traffic to the root servers once the root zone is signed. We will upload any significant data events during the six-month rollout of the signed root zone to the DNS-OARC PCAP repository so that it\u0026rsquo;s available to researchers for further analysis.\n","permalink":"/blogs/the-signed-root-is-coming-and-what-this-means-for-you/","tags":["BIND","DNS","DNSSEC"],"title":"The Signed Root Is Coming! (And What This Means For You)"}]