Arduino “warning: internal error: out of range error”

There are probably a million other reasons you could get this error, but I didn’t see anyone document the case in which I got it: I wrote some raw assembly (i.e., hexadecimal machine code) and got the instruction length wrong. I.e., something like this:

__asm volatile(
    ".byte 0x00 \n"
);

Well, AVR instructions are 16 bits, so clearly the single byte above wouldn’t encode a valid instruction and would ruin alignment; it would have to look like this:

__asm volatile(
    ".byte 0x00 \n"
    ".byte 0x00 \n"
);

VIC-20 repair, oxidized pins

I recently had the chance to repair an NTSC VIC-20 that would not boot and just show a black screen.

The version I got to work on has a power brick containing a transformer. That power brick converts mains input to 9 volts AC, which is fed directly into the computer. The rest of the (linear) power supply circuitry is inside the computer, including a very large capacitor. This capacitor wasn’t bulging, but it showed some signs of electrolyte leakage. That was not the cause of the problem, though; in fact, I chose to skip replacing the capacitor for now.

In order to diagnose the problem I first checked the voltages, which all checked out perfectly.

Then I used an oscilloscope to check what the VIC and the CPU were doing.

The main clock signal is generated by the VIC chip from the output of a 14.31818 MHz crystal. Everything looked perfect in this area.

I decided to reseat all the socketed IC chips (CPU, ROMs, IO chips — I was not able to extract the VIC chip even though it was socketed), but that did not help.

The CPU has a reset pin which is held low for a few seconds and then goes high; this worked perfectly too. A tiny 555 IC on the board is responsible for doing this.

However there was no activity on the address lines at all; it seemed like all address lines were held high. (There is a possibility that some lines (A0~A3) were low, maybe I did not check carefully enough.) However, occasionally, right after clicking the power switch there seemed to be some normal-looking activity on the CPU’s address lines, which very quickly faded away. This happened maybe once in 10 or 20 power cycles.

So then my first suspicion was that the 6502 CPU might have given up. Thankfully, I had access to another board using a 6502 CPU. (Actually this one was a 65CS02 CPU.) As the voltages looked normal it seemed very low risk to swap the CPUs to see what would happen. Much to my dismay at first, the 6502 CPU extracted from the VIC-20 worked on the other board.

However, I was dismayed for only a few seconds: the 65CS02 CPU, when put in the VIC-20, didn’t quite make the computer work, but I was now able to see a lot of activity on the CPU’s address lines!

The new theory was that the IC pins were much more oxidized than expected. We extracted all the ICs (including the VIC) again and cleaned them up with concentrated alcohol. And it still did not work! However, on the oscilloscope most pins now showed normal activity.

Thinking there might be another problem, perhaps with the ROMs, I decided to insert a game cartridge into the cartridge port.

And it booted up!

Okay, is the BASIC ROM busted?

Well. After turning the computer off and taking out the cartridge and turning it on again, it would successfully boot into BASIC! What the?!

I thought that simply reseating ICs would immediately take care of most “bad contact” problems, at least temporarily. Well, turns out that oxidation can be pretty serious sometimes!

We had even checked continuity between IC pins and the socket’s pins on the other side of the board, and got beeps as normal.

So it seems this is not a very good test. Well, today (yesterday actually) I learned.

Another note: the computer and power supply were made with 120 V/60 Hz in mind according to the labels on the back, but they worked fine at 100 V/50 Hz.



Tank-type dishwashers: why I chose the AINX AX-S3W

After reading lots of reviews on Amazon, other shopping sites, YouTube, and so on, I went with the AINX AX-S3W. (I actually bought it from Rakuten, where it was slightly cheaper.)

  • The reason I didn’t go with the siroca 2WAY dishwasher is that, according to everyone’s reviews, its drying function isn’t rated very well. The auto-open feature looks nice, but it didn’t seem possible to dry for a little while and then auto-open. UV also sounds nice, but when I asked myself whether I really need it, I concluded I don’t.
  • Iris Ohyama’s drying function was also rated poorly.
  • Aqua’s drying function was also rated somewhat poorly.
  • Toshiba’s drying function was also rated poorly.
  • Panasonic was out, because their models aren’t tank-based.
  • I didn’t go with Thanko because of its poorly rated drying function, either. Also, you can’t connect a water supply hose. I currently fill the tank by hand, and in my current home a supply hose wouldn’t see any use, but that might change if I ever move, so in my head that was a minus.
  • Maxzen, iimono117, and moosoo are all the same machine, I believe. Although the Maxzen’s drying function might be unheated rather than heated air? Also, at least on Amazon, the moosoo currently seems to be out of stock. I agonized quite a bit over whether to pick the moosoo. Its drying function is the most robust, and its washing temperature is high, so leftover grime seems unlikely. The reason I didn’t choose it is that both its rinsing and drying temperatures are high, and I was worried about plastic containers. I believe the temperatures are somewhat lower in the quick mode, but that mode is short. Also, its “vegetable wash” course looks nice, but the AINX’s “water wash” course is described on an overseas site as being for washing vegetables, so I think the AINX can wash vegetables too. (I haven’t tried washing vegetables, though. It might be good for, say, rather dirty spinach or potatoes?) Also, the moosoo reportedly has no drying-only course.
  • Kwasyo’s features look good, but it entered the Japanese market quite recently, so I was a little worried about things like the support infrastructure. However, several models appear to be sold on overseas Amazon sites, so maybe there isn’t that much to worry about?

With the AINX, the drying function uses heated air for the first 10 minutes, which I figured is better than unheated air only.
Also, I think it looks cooler than the moosoo, and its lights are pretty. (They turn off quickly, though.)
It is also sold on overseas Amazon sites (under the name “Novete”).
Next, cleaning performance. A normal dishwasher washes in two passes. The first pass (the “pre-wash”) removes large debris with a small amount of detergent and water; the water is then drained, and the “main wash” follows. That’s why normal dishwashers often have two detergent compartments. The AINX, at least, has only one: there is no pre-wash, and it goes straight to the main wash.
That’s not to say it doesn’t get things clean, but occasionally, with oily dishes, the upper basket (the cutlery basket) and plastic containers in the lower basket would come out slightly sticky. After I stopped using Finish tablets and switched to Kyukyutto Clear Disinfect powder detergent (the one with orange oil), adding a bit extra depending on the load and how dirty it is, that stopped happening entirely.
With Finish, using the “strong” course was also somewhat effective. With Kyukyutto I never need the strong course.
AINX is apparently releasing a model with UV this year (2021); if that can be combined with the breeze mode, it should be even more hygienic.

By the way, the dishwasher sits on a standing desk that I bought for 2,500 yen at an Off House because it seemed just right. There’s no noticeable vibration. (It closely resembles this one.)

Useful developer tool console one-liners/bookmarklets

Many websites lack useful features, and many go to great lengths to prevent you from downloading images, and so on.

When things get too annoying I sometimes open the developer tools and write a short piece of JavaScript to help me out. Okay, but isn’t it annoying to open developer tools every time? Yes! So right-click your bookmarks toolbar, press the button to add a bookmark, give it an appropriate title, and in the URL, put “javascript:” followed by the JavaScript code. (Most of the following examples already have javascript: prepended to the one-liner.)

Note that you have to encapsulate most of these snippets in an anonymous function, i.e. (function() { … })(). Otherwise your browser might open a page containing whatever value your code snippet returns.
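For illustration, here is the difference in miniature (plain arithmetic stands in for real snippet code; the variable names are mine):

```javascript
// Without the wrapper, a bookmarklet like "javascript:2+2" evaluates to a
// value, and the browser may replace the current page with that value.
// Wrapped in an immediately invoked anonymous function, the expression as
// a whole evaluates to undefined, so the browser stays on the current page.
var bare = 2 + 2;                          // evaluates to 4
var wrapped = (function () { 2 + 2; })();  // evaluates to undefined
```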

(It is my belief that none of the following code snippets are copyrightable, as each may constitute the most obvious way to do something. The code snippets can therefore be regarded as public domain, or alternatively, at your option, as published under the WTFPL.)

Note that these snippets are only tested on Firefox. (They should work in Chrome too, though.)

Hacker News: scroll to next top-level comment

Q: How likely is this to stop working if the site gets re-designed?
A: Would probably stop working.

This is a one-liner to jump to the first/second/third/… top-level comment. (Because often the first few threads get very monotonous after a while?) If you don’t see anything happening, maybe there is no other top-level comment on the current page.


javascript:document.querySelectorAll("img[src='s.gif'][width='0']")[1].scrollIntoView(true)

(Bookmarklet version; basically just javascript: added at the beginning.)

The “true” parameter in scrollIntoView(true) means that the comment will appear at the top (if possible). Giving false will cause the comment to appear at the bottom of your screen. Note that the scrolling isn’t perfect; the comment will be half-visible.

It would be useful to be able to remember where we last jumped to, and then jump to that + 1. We can add a global variable for that, and to be reasonably sure it doesn’t clash with an existing variable we’ll give it a name like ‘pkqcbcnll’. If the variable isn’t defined yet, we’ll get an error, so to avoid errors, we’ll use typeof to determine if we need to define the variable or not.

javascript:if (typeof pkqcbcnll == 'undefined') { pkqcbcnll = 0 }; document.querySelectorAll("img[src='s.gif'][width='0']")[pkqcbcnll++].scrollIntoView(true);

Wikipedia (and any other site): remove images on page

Q: How likely is this to stop working if the site gets re-designed?
A: Unlikely to stop working, ever.

Sometimes Wikipedia images aren’t so nice to look at; here’s a quick one-liner to get rid of them:

document.querySelectorAll("img").forEach(function(i) { i.remove() });

Instagram (and e.g. Amazon and other sites): open main image in new tab

Q: How likely is this to stop working if the site gets re-designed?
A: Not too likely to stop working.

When you search for images or look at an author’s images in the grid layout, right-click one of the images and press “open in new tab”, then use the following script to open just the image in a new tab. I.e., it works on pages like this: (random cat picture, no endorsement intended.)

Or on Amazon product pages, you’ll often get a large image overlaid on the page when you hover your cursor over a thumbnail. When you execute this one-liner in that state, you’ll get that image in a new tab for easy saving/copying/sharing. E.g., on this page: you will get this image in a new tab if you hover over that thumbnail. (Random product on Amazon, no endorsement intended.)

This script works by going through all image tags on the page and finding the source URL of the largest image. As of this writing, I believe this is the highest resolution you can get without guessing keys or reverse-engineering.

javascript:var imgs=document.getElementsByTagName("img");var height;var max_height=0;var i_for_max_height=0;for(var i=0;i<imgs.length;i++){height=parseInt(getComputedStyle(imgs[i]).getPropertyValue("height"));if(height>max_height){max_height=height;i_for_max_height=i;}}"_blank");

xkcd: Add title text underneath image (useful for mouse-less browsing)

Q: How likely is this to stop working if the site gets re-designed?
A: May or may not survive a site redesign.

Useful for mouseless comic reading. There’s just one <img> with title text, so we’ll take the super-simple approach. Alternatively we could for example use #comic > img or we could select the largest image on the page as above.

javascript:document.querySelectorAll("img[title]").forEach(function(img) { document.getElementById("comic").innerHTML += "<br />" + img.getAttribute("title"); })

Instagram: Remove login screen that appears after a while in the search results

To do this, we have to get rid of something layered on top of the page, and then remove the “overflow: hidden;” style attribute on the body tag.

A lot of pages put “overflow: hidden” on the body tags when displaying nag screens, so maybe it’s useful to have this as a separate bookmarklet. Anyway, in the following example we do both at once.

javascript:document.querySelector("div[role=presentation]").remove() ; document.body.removeAttribute("style");

Sites that block copy/paste on input elements

This snippet will present a modal prompt without any restrictions, and put the entered text into the text input element. This example doesn’t work in iframes and doesn’t check that we’re actually on an input element:

javascript:(function(){ document.activeElement.value = prompt("Please enter value")})()

The following snippet also handles iframes (e.g. on and should even handle nested iframes (untested). The problem is that the <iframe> element becomes the activeElement when an input element is in an <iframe>. So we’ll loop until we find an activeElement that doesn’t have a contentDocument object. And then blindly assume that we’re on an input element:

javascript:(function(){var d = document; while (d.activeElement && d.activeElement.contentDocument) { d = d.activeElement.contentDocument; } d.activeElement.value = prompt("Please enter value")})()

Removing <iframe>s to get rid of a lot of ads

Many ads on the internet use <iframe> tags. Getting rid of all of these at once may clean up pages quite a bit — but some pages actually use <iframe>s for legitimate purposes.

javascript:(function(){document.querySelectorAll("iframe").forEach(function(i) { i.remove() });})();

Amazon Music: Get list of songs in your library

I.e., songs for which you have clicked the “+” icon. Maybe you want to get a list of your songs so you can move to a different service, or maybe you just want the list. The code presented here isn’t too likely to survive a redesign, but could probably be adjusted if necessary. Some JavaScript knowledge might come in handy if you want to get this script to work.

Amazon Music makes this process rather difficult because the UI unloads and reloads elements dynamically when it thinks you have too many on the page at once.

We are talking about this page (using the default, alphabetically ordered setting), BTW:

Ideally, you would just press Ctrl+A and paste the result into an editor, or select all table cells using, e.g., Ctrl+click.

However, you’ll only get around 15 items in that case. (The previous design let you copy everything at once and was superior in other respects too, IMO.)

Anyway, we’ll use the MutationObserver to get the list. Using the MutationObserver, we’ll get notified when elements are added to the page. Then we just need to scroll all the way down and output the collected list. We may get duplicates, depending on how the page is implemented, but we’ll ignore those for now — we may have duplicates anyway if we have the same song added more than once. So I recommend you get rid of the duplicates yourself, by using e.g. sort/uniq or by loading the list into Excel or LibreOffice Calc or Google Sheets, or whatever you want.

On Amazon Music’s page, the <music-image-rows> elements that are dynamically added to the page contain four <div> elements classed col1, col2, col3, and col4. (This could of course change any time.) These <div>s contain the song name, artist name, album name, and song length, respectively, which is all we want (well, all I want). We’ll just use querySelectorAll on the newly added element to select .col1, .col2, .col3, and .col4 and output the textContent to the console. Occasionally, a parent element pops up that contains all the previous .col* <div> elements. We’ll ignore that by only evaluating selections that have exactly four elements.

  1. Scroll to top of page
  2. Execute code (e.g., by opening console and pasting in the code)
  3. Slowly scroll to the end of the page
  4. Execute observer.disconnect() in the console (otherwise text will keep popping up if you scroll)
  5. Select and copy all text in the console (or use right-click → Export Visible Messages To), paste in an editor or (e.g.) Excel. There are some Find&Replace regular expressions below that you could use to post-process the output.
  6. Note that the first few entries in the list (I think sometimes it’s just the first one, sometimes it’s the first four) are never going to be newly added to the document, so you will have to copy them into your text file yourself.

The code’s output is rather spreadsheet-friendly, tab-delimited and one line per entry. You can just paste that into your favorite spreadsheet software.

The code cannot really be called a one-liner at this point, but feel free to re-format it and package it as a bookmarklet, if you want.

observer = new MutationObserver(function(mutations) {
  mutations.forEach(function(mutation) {
    if (mutation.type === "childList") {
      if (mutation.addedNodes && mutation.addedNodes.length) {
        var string = "";
        var selector = mutation.addedNodes[0].querySelectorAll(".col1, .col2, .col3, .col4");
        if (selector.length == 4) {
          selector.forEach(function(e) { if (e.textContent) string += e.textContent + "\t" });
          string += "----------\n";
          console.log(string);
        }
      }
    }
  });
});
observer.observe(document, { childList: true, subtree: true });

Post-processing regular expressions (the “debugger eval code” one may be Firefox-specific):

Find: [0-9 ]*debugger eval code.*\n
Replace: (leave empty)

Find: \n----------\n
Replace: \n
(Do this until there are no more instances of the "Find" expression)

Find: \t----------
Replace: (leave empty)

There are a lot of duplicates. You can get rid of them either by writing your list to a file and then executing, e.g.: sort -n amazon_music.txt | uniq > amazon_music_uniq.txt
Or you can get Excel to do the work for you.
Make sure that you have the correct number of lines. (The Amazon Music UI enumerates all entries in the list. You would want the same number of lines in your text file.)

Outputting QR codes on the terminal

In these dark ages, a lot of software (mostly chat apps) only works on smartphones. While it’s easy to connect a USB-OTG hub to most smartphones (even my dirt-cheap, now three-year-old Android smartphone supports this), having two keyboards on your desk can be kind of annoying.

While there are a bunch of possible solutions to this problem, many of these solutions do not fix the problem when you’re not on your home setup. Which is why I often just use QR codes to send URLs to my phone, and there are a lot of QR code generator sites out there.

QR code generator sites are useful because they work everywhere, but many are slow and clunky. Perhaps acceptable in a pinch, but… what if you could just generate QR codes on the terminal?

Well, some cursory googling revealed this library:, which doesn’t have any external (non-standard library) dependencies, is short enough to skim over for malicious code, and comes with an easily adapted example. (I am reasonably confident that there is no malicious code at ad8353b4581fa11fc01a50ebf56db3833462fc13.)

Note: I very rarely use Go. Here is what I did to compile this:

$ git clone
$ mkdir src
$ mv qrencode/ src/

$ cat > qrcodegenerator.go
package main
import (
	"bytes"
	"fmt"
	"os"
	"qrencode"
)
func main() {
	var buf bytes.Buffer
	for i, arg := range os.Args {
		if i > 1 {
			if err := buf.WriteByte(' '); err != nil { panic(err) }
		}
		if i > 0 {
			if _, err := buf.WriteString(arg); err != nil { panic(err) }
		}
	}
	grid, err := qrencode.Encode(buf.String(), qrencode.ECLevelQ)
	if err != nil {
		fmt.Println(err)
		return
	}
	qrencode.PrintAA(&grid, false)
}
$ GOPATH=$PWD go build qrcodegenerator.go
$ ./qrcodegenerator test
$ ./qrcodegenerator test

Note: the above code is adapted from example code in the file and is therefore LGPL3.

Since Go binaries are static (that’s what I’ve heard at least), you can then move the executable anywhere you like (e.g. ~/bin) and generate QR codes anywhere. Note that they’re pretty huge, i.e. for ‘’ (26 bytes) the QR code’s width will be 62 characters. For e.g. ‘’ (this post) the width is 86 characters.

A simple netcat-based DNS server that returns NXDOMAIN on everything

sudo ncat -i1 -k -c "perl -e 'read(STDIN, \$dns_input, 2); \$dns_id = pack \"a2\", \$dns_input; print \"\$dns_id\x81\x83\x00\x00\x00\x00\x00\x00\x00\x00\";'" -u -vvvvvv -l 53
  • A DNS request contains two random bytes at the beginning that have to appear in the first two bytes in the response.
  • The DNS flags for an NXDOMAIN response are 0x81 0x83
  • The rest of the bytes can be 0, which mostly means that we have zero other sections in our response
  • The above example uses nmap-ncat, as found in Red Hat-based distributions; it can also be installed on Debian-based distributions (apt-get install ncat)
  • -i1 causes connections to be discarded after 1 second of idle time (optional)
  • -k means that we can accept more than one connection
  • -c means that whatever we get from the other side of the connection gets piped to a perl process running in a shell process (maybe -e is the same in this case)
  • -u means UDP (leaving this out should work if you do DNS over TCP)
  • -vvvvvv means that we can see what’s happening (optional)
  • -l means that we’re listening rather than sending, on port 53
  • read(STDIN, $dns_input, 2) # read exactly two bytes from STDIN
  • $dns_id = pack "a2", $dns_input # two bytes of arbitrary random data from $dns_input will be put into $dns_id
  • print "$dns_id\x81\x83\x00\x00\x00\x00\x00\x00\x00\x00" # sends $dns_id, NXDOMAIN, and zeros as described above to the other side
  • Note: I didn’t really test this beyond the proof-of-concept stage. If anything’s iffy, feel free to let me know.
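For clarity, here is the same response-building logic sketched in Node.js (the function name and Buffer-based framing are my own; the byte values are the ones from the one-liner above):

```javascript
// Build a minimal 12-byte DNS header that answers any query with NXDOMAIN.
// "request" is a Buffer holding the raw UDP payload of the incoming query.
function nxdomainResponse(request) {
  const response = Buffer.alloc(12); // DNS header is 12 bytes, zero-filled
  request.copy(response, 0, 0, 2);   // echo the query's 2-byte transaction ID
  response[2] = 0x81;                // QR=1 (this is a response), RD=1
  response[3] = 0x83;                // RA=1, RCODE=3 (NXDOMAIN)
  // bytes 4-11 stay 0: zero entries in the question, answer, authority,
  // and additional sections
  return response;
}
```

Like the one-liner, this skips echoing the question section back, which stricter clients might object to.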

Slow DNS in Docker containers without internet connection

If you’re running a Docker container on a Docker network that should _normally_ have internet access, but doesn’t (for whatever reason, see next paragraph for an example), you might find that DNS lookups in that Docker container will be very, very slow. If the DNS lookup “freezes” your program (prevents your program from serving further requests for a short while, etc.), this can be very inconvenient. (For example, and this is how I noticed the problem: if you’re ssh’ing into a container using dynamic port-forwarding to access other containers, every DNS lookup will freeze your ssh connection.)

In my case, I’m running a (prototype? beta?) test environment that is generally not supposed to connect to the internet to avoid “accidentally” doing silly things in production. However, a few sites have to be whitelisted, and whitelisting has to be done on a DNS basis. If you have similar needs, the solution here might help you. Though it’s hacky.

Diving in

I decided to dive in and see if I can change this behavior at all. Normal DNS failure time:

$ time curl
curl: (6) Could not resolve host:; Unknown error

real 0m0.009s
user 0m0.005s
sys 0m0.000s

In a Docker container:

$ time curl
curl: (6) Could not resolve host:; Unknown error

real 0m20.599s
user 0m0.010s
sys 0m0.010s

When you don’t have internet access, your host system will in most cases lack a ‘default’ route in the output of ‘ip route’. However, your containers don’t know anything about your host system’s routing tables, and your container’s network namespace will still have its default route.

Note: Docker uses the host’s /etc/resolv.conf to figure out where to forward DNS requests to, and if /etc/resolv.conf doesn’t specify any servers, Docker will use and/or by default. You can override the default in /etc/docker/daemon.json. (I believe you have to restart Docker after changing the file; sending a kill -HUP will reload some settings specified in the file, but not this one, AFAICT.) Anyway, the examples below will all show or
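For example, a daemon.json fragment overriding the fallback might look like this (just a sketch; your daemon.json may contain other keys as well):

```json
{
  "dns": ["", ""]
}
```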

When you do curl in a Docker container, it will send this DNS request to Docker’s internal DNS resolver, as specified in the container’s /etc/resolv.conf:

nameserver
One easy thing we can do is add the following to the container’s /etc/resolv.conf. This speeds things up quite a bit:

options timeout:1 attempts:1
$ time curl
curl: (6) Could not resolve host:; Unknown error

real 0m2.541s
user 0m0.010s
sys 0m0.014s

(The following implementation details were gathered from stracing the dockerd process, and might change in the future.) This nameserver runs in the dockerd process, but the dockerd process switches to the container’s network name space before forwarding the request:

# strace -vvvttf -p $dockerd_pid
2124 03:18:52.686952 openat(AT_FDCWD, "/var/run/docker/netns/dd995925297c", O_RDONLY) = 20
2124 03:18:52.686999 setns(20, CLONE_NEWNET) = 0
2124 03:18:52.687088 setsockopt(21, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
2124 03:18:52.687119 connect(21, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = 0

Tangent: here’s one way to ascertain which namespace this is:

# nsenter --net=/var/run/docker/netns/dd995925297c ip a
inet brd scope global eth1

Then you can just issue docker inspect or docker network inspect commands to figure out which container this IP belongs to.

And back: if you do the same UDP connection with the same parameters on the host system, DNS requests fail straight away, because the kernel is sensible enough to notice that there is no route to this host:

08:06:25.289858 setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
08:06:25.289981 connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = -1 ENETUNREACH (Network is unreachable)

So one thing I thought we could do is to force this connect call to fail. I thought one way might be to get Docker to use TCP to connect to DNS servers. This can be accomplished in /etc/docker/daemon.json like this:

"dns-opt": "use-vc"

Unfortunately that didn’t help because of the SOCK_NONBLOCK flag. The strace now looks like this:

1580 18:55:02.447571 setsockopt(34, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
1580 18:55:02.447612 connect(34, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = -1 EINPROGRESS (Operation now in progress)

(Note: adding attempts:1 timeout:1 to dns-opt didn’t seem to have an effect.)

Furthermore, removing the default route doesn’t help either. curl still blocks for ~2.526 seconds. Docker keeps retrying even if it gets ENETUNREACH:

# nsenter --net=/var/run/docker/netns/dd995925297c ip route del default
19:39:44.614836 connect(21, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("")}, 16) = -1 ENETUNREACH (Network is unreachable)

So up to now we have avoided reading Docker’s source code (it’s written in a weird, foreign language), but at this point we have no choice but to take a look. Perhaps my assumptions were wrong and Docker’s DNS resolver does notice there is a problem, but just retries no matter what. So the first thing we do is enable debug logs, so that we hopefully get some messages output close to the code we have to look at. We just need to set the following in /etc/docker/daemon.json, reload dockerd (kill -HUP), and run journalctl -xeu docker:

"debug": true

Here are some logs we got after we removed our default route:

time="2020-04-05T19:59:29.285616784+09:00" level=warning msg="[resolver] connect failed: dial udp connect: network is unreachable"
time="2020-04-05T19:59:29.285646884+09:00" level=warning msg="[resolver] connect failed: dial udp connect: network is unreachable"

And here are some logs we got with the default route still existing:

time="2020-04-05T19:30:14.118994138+09:00" level=debug msg="[resolver] read from DNS server failed, read udp> i/o timeout"
time="2020-04-05T19:30:17.115909962+09:00" level=debug msg="[resolver] read from DNS server failed, read udp> i/o timeout"

Searching the docker code takes us straight to the ServeDNS function in docker-ce/components/engine/vendor/ (I did a fresh clone today; the newest commit in my git log is 92768b32964e3037e520ab8e74fe190c39f4c83d. The code may look different in your version.)

So we’ve got a long for loop on line 432 that has some breaks and some continues and uses a couple hard-coded constants here and there.

maxExtDNS       = 3 //max number of external servers to try
extIOTimeout = 4 * time.Second
for i := 0; i < maxExtDNS; i++ {

One thing we could do is look for a workaround, i.e. anything that will make this function terminate earlier. Unfortunately, I didn’t have any success on that front. So it looks like we currently have no choice but to set up our own DNS server just to prevent Docker from freezing our ssh connections (or whatever software is being frozen in your case).

Setting up our own DNS server

Unfortunately, we can’t just run dnsmasq on the host and make it listen on; obviously means something different inside a container than it does on the host. We generally can’t use any of the other IPs the host machine may have either — Docker most likely adds iptables rules that block containers from communicating with those IPs. (Though we could manually delete these iptables rules.)

So one way around this problem is to run a Docker container that runs dnsmasq. Unfortunately, this can get a bit messy — we need to use a static IP on that container, and we need to configure our host to use that IP in its /etc/resolv.conf.

One kind of nice — though very hacky — solution I came up with is to add a network and two containers that pretend to be and (or one container and two networks).

### fake-dns/Dockerfile
FROM centos
RUN yum -y install dnsmasq
ENTRYPOINT /usr/sbin/dnsmasq --server=/ -R
$ docker build --tag fake-dns fake-dns
$ docker network create fake-dns-net --subnet
$ docker run -d --network fake-dns-net -it --ip fake-dns
$ docker run -d --network fake-dns-net -it --ip fake-dns

$ docker network connect fake-dns-net existing-container

Connecting your already running containers that you need to fix DNS on to fake-dns-net will make DNS requests immediately get NXDOMAIN responses from our fake and servers, and will take care of any freezing issues you may have, all while DNS requests for other container names will still work as normal. (In this example, is whitelisted and will be forwarded to Cloudflare’s DNS servers. The -R flag on the dnsmasq command means that dnsmasq will ignore any servers listed in /etc/resolv.conf)


In my opinion, Docker’s automatic fallback to and isn’t the greatest Docker feature, to say the least. Perhaps there should be a way to tell Docker not to fall back, and to send NXDOMAIN when it can’t answer anything by itself.

Unfortunately, this “hack” is likely not that future-proof. Future versions of Docker could for example expand or change the list of fallback servers, and this hack would have to be adapted to Docker’s changes. However, explicitly specifying { "dns": ["", ""] } in /etc/docker/daemon.json could perhaps take care of this problem. (I haven’t actually tested that.)

Bash: how to put command output/file contents on the command line

In certain situations, you may want to put a command that you have, e.g., saved in a file onto your command line before executing it. As I didn’t find an answer straight away, here’s a quick-and-dirty way to do it, by executing a command when you press a keyboard shortcut:

bind -x '"\C-g":"READLINE_LINE=$(cat dnsmasq_command)"'

This will put the contents of the file named ‘dnsmasq_command’ on your command line when you press Control-G.

And while we’re at it, while it’s got nothing to do with this article, here’s a binding that puts a time stamp (like 20200408022000, no, not a palindrome) on your command line:


Skype for Business fortunes

Skype for Business:
    Accept any substitute.
    If it's broke, don't fix it.
    If it ain't broke, fix it.
    Form follows malfunction.
    The Cutting Edge of Obsolescence.
    The trailing edge of software technology.
    Armageddon never looked so good.
    Japan's secret weapon.
    You'll envy the dead.
    Making the world safe for competing communication tools.
    Let it get in YOUR way.
    The problem for your problem.
    If it starts working, we'll fix it.  Pronto.
    It could be worse, but it'll take time.
    Simplicity made complex.
    The greatest productivity aid since typhoid.
    Flakey and built to stay that way.
One thousand monkeys.  One thousand Windows 8 machines.  One thousand years.
    Skype for Business.
Skype for Business:
    It's not how slow you make it.  It's how you make it slow.
    The communication tool preferred by masochists 3 to 1.
    Built to take on the world... and lose!
    Don't try it 'til you've knocked it.
    Power tools for Power Fools.
    Putting new limits on productivity.
    The closer you look, the cruftier we look.
    Design by counterexample.
    A new level of software disintegration.
    No hardware is safe.
    Do your time.
    Rationalization, not realization.
    Old-world software cruftsmanship at its finest.
    Gratuitous incompatibility.
    Your mother.
    THE user interference management system.
    You can't argue with failure.
    You haven't died 'til you've used it.
The environment of today... tomorrow!
    Skype for Business.
Skype for Business:
    Something you can be ashamed of.
    30% more entropy than the leading communication tool.
    The first fully modular software disaster.
    Rome was destroyed in a day.
    Warn your friends about it.
    Climbing to new depths.  Sinking to new heights.
    An accident that couldn't wait to happen.
    Don't wait for the movie.
    Never use it after a big meal.
    Need we say less?
    Plumbing the depths of human incompetence.
    It'll make your day.
    Don't get frustrated without it.
    Power tools for power losers.
    A software disaster of Biblical proportions.
    Never had it.  Never will.
    The software with no visible means of support.
    More than just a generation behind.
Hindenburg.  Titanic.  Edsel.
    Skype for Business.
Skype for Business:
    The ultimate bottleneck.
    Flawed beyond belief.
    The only thing you have to fear.
    Somewhere between chaos and insanity.
    On autopilot to oblivion.
    The joke that kills.
    A disgrace you can be proud of.
    A mistake carried out to perfection.
    Belongs more to the problem set than the solution set.
    To err is Skype for Business.
    Ignorance is our most important resource.
    Complex nonsolutions to simple nonproblems.
    Built to fall apart.
    Nullifying centuries of progress.
    Falling to new depths of inefficiency.
    The last thing you need.
    The defacto substandard.
Elevating brain damage to an art form.
    Skype for Business.
Skype for Business:
    We will dump no core before its time.
    One good crash deserves another.
    A bad idea whose time has come.  And gone.
    We make excuses.
    It didn't even look good on paper.
    You laugh now, but you'll be laughing harder later!
    A new concept in abuser interfaces.
    How can something get so bad, so quickly?
    It could happen to you.
    The art of incompetence.
    You have nothing to lose but your lunch.
    When uselessness just isn't enough.
    More than a mere hindrance.  It's a whole new barrier!
    When you can't afford to be right.
    And you thought we couldn't make it worse.
If it works, it isn't Skype for Business.
Skype for Business:
    You'd better sit down.
    Don't laugh.  It could be YOUR thesis project.
    Why do it right when you can do it wrong?
    Live the nightmare.
    Our bugs run faster.
    When it absolutely, positively HAS to crash overnight.
    There ARE no rules.
    You'll wish we were kidding.
    Everything you never wanted in a communication tool.  And more.
    Dissatisfaction guaranteed.
    There's got to be a better way.
    The next best thing to keypunching.
    Leave the thrashing to us.
    We wrote the book on core dumps.
    Even your dog won't like it.
    More than enough rope.
    Garbage at your fingertips.
Incompatibility.  Shoddiness.  Uselessness.
    Skype for Business.

Adapted from

Standing on the shoulders of giants

Making new discoveries usually requires knowledge from previous discoveries. Discoveries are published as text in research papers. Scientists find these research papers using keywords they plug into a search engine or when other research papers cite them. Being able to search by keywords has obviously made scientific research much easier than before.

However, the research papers themselves are still written in natural language, and unless the authors avoid pronouns like “it” or “them” (which exist to avoid repeating nouns over and over again), it’s difficult to automatically extract meaning from them.

So perhaps people shouldn’t be writing research papers in (e.g.) English and should instead use something that is easier for computers to parse. The research paper would contain claims, a numeric indicator of how certain each claim is (or where it came from), and how each claim came about: e.g., an experiment (with instructions for how the experiment was set up and why), logical reasoning (with the actual chain of reasoning), etc. Then you could, for example, search for claims instead of just keywords.

Instead of just using the keywords “ocean plastic waste”, you could search for “plastic waste in oceans negatively impacts ocean fauna”. Instead of typing in this query, you would choose every noun, verb and other qualifiers from a searchable menu. You would be able to display this menu in any language. The words in the menu would all mean a specific thing, and anybody would be able to look at the definitions and properties of every word. Eventually, the system could be programmed to (e.g.) look at two different claims and judge whether these claims might be related, based on common subjects or objects, and generate potential new claims all by itself. For example, claim 1: “Thunderstorms may under certain circumstances cause blackouts” and claim 2: “Certain atmospheric conditions cause thunderstorms” would logically mean that “certain atmospheric conditions cause blackouts”, which is true. The system probably shouldn’t let users use the word “certain” though.
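The claim-combining idea above can be sketched in a few lines. This is only an illustration under an assumed schema (claims as subject–verb–object triples; only the verb “causes” is chained), not a real inference engine:

```python
# Hypothetical schema: each claim is a (subject, verb, object) triple.
# If "A causes B" and "B causes C" are both known, propose "A causes C".

def propose_claims(claims):
    """Chain causal claims over a shared term to propose new claims."""
    proposals = []
    for (s1, v1, o1) in claims:
        for (s2, v2, o2) in claims:
            if v1 == v2 == "causes" and o1 == s2:
                new_claim = (s1, "causes", o2)
                if new_claim not in claims and new_claim not in proposals:
                    proposals.append(new_claim)
    return proposals

claims = [
    ("thunderstorms", "causes", "blackouts"),
    ("atmospheric conditions", "causes", "thunderstorms"),
]
print(propose_claims(claims))
# -> [('atmospheric conditions', 'causes', 'blackouts')]
```

A real system would of course also have to carry the certainty qualifiers (“may under certain circumstances”) through the chain rather than dropping them, which is exactly why the word “certain” is troublesome.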

How hard would it be to make such a system? Well, it’s best to start in a single niche and slowly expand the system. Here’s a very simple (completely static) example that is a bit different in that it just generates natural English or Japanese text based on what values are selected in the menus:
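Such a static generator might look something like this. The menu options and sentence templates below are made up for illustration (they are not the original example’s), but the mechanism is the same: every menu value maps to a word in each language, and a per-language template turns the selections into a sentence:

```python
# Made-up menus and templates: each "menu" slot has fixed options,
# and each option knows its wording in every supported language.

TEMPLATES = {
    "en": "{flaw} in {component} allows attackers to {impact}.",
    "ja": "{component}の{flaw}により、攻撃者は{impact}できます。",
}

MENUS = {
    "flaw":      {"buffer overflow": {"en": "A buffer overflow",
                                      "ja": "バッファオーバーフロー"}},
    "component": {"the parser":      {"en": "the parser",
                                      "ja": "パーサー"}},
    "impact":    {"run code":        {"en": "execute arbitrary code",
                                      "ja": "任意のコードを実行"}},
}

def generate(lang, flaw, component, impact):
    # Look up each selected menu value in the requested language,
    # then fill that language's sentence template.
    selections = {"flaw": flaw, "component": component, "impact": impact}
    words = {slot: MENUS[slot][choice][lang] for slot, choice in selections.items()}
    return TEMPLATES[lang].format(**words)

print(generate("en", "buffer overflow", "the parser", "run code"))
# -> A buffer overflow in the parser allows attackers to execute arbitrary code.
print(generate("ja", "buffer overflow", "the parser", "run code"))
# -> パーサーのバッファオーバーフローにより、攻撃者は任意のコードを実行できます。
```

Note that the selections, not the generated sentences, are the canonical data: adding a new output language only means adding one template and one word per menu option.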

Saving the selections would be a first step in the right direction. Currently, only one “allows attackers to:” is allowed, but you would probably want to 1) allow an arbitrary-length chain, 2) add something indicating the author’s degree of certainty (“may allow”), 3) add other verbs and other nouns, and 4) capture the reasoning behind a set of these claims (it gets complex here, but in the end you just have to select subjects, verbs, objects, ifs, etc.).
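The extensions just listed suggest a shape for the saved data. All field names below are hypothetical, just one guess at how a claim with a certainty qualifier and an arbitrary-length impact chain could be stored and rendered:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    subject: str                    # e.g. "A buffer overflow"
    verb: str                       # e.g. "allows", "may allow" (certainty lives here too)
    objects: list                   # arbitrary-length chain: each step enables the next
    certainty: str = "stated"       # e.g. "stated", "likely", "speculative"
    reasoning: list = field(default_factory=list)  # supporting claims / experiments

def render(claim):
    # Flatten the chain into English; other languages would get their own renderer.
    chain = ", which in turn allows attackers to ".join(claim.objects)
    return f"{claim.subject} {claim.verb} attackers to {chain}."

c = Claim("A buffer overflow", "may allow",
          ["crash the service", "execute arbitrary code"])
print(render(c))
# -> A buffer overflow may allow attackers to crash the service, which in turn allows attackers to execute arbitrary code.
```

Searching for claims then becomes filtering these records by subject, verb, or object rather than grepping prose.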

Perhaps selecting things from a menu would get old quickly (unless you only do it for the main claims). So the computer could try its best to analyze a sentence and ask the user to confirm that its interpretation is correct. The preceding sentence could be analyzed like this, but you’ll see that this could be tricky, and just selecting something from a menu might be the better option after all: So [potential solution for previous follows:] the computer [the software] could try its best [could potentially (but probably wouldn’t manage to do this perfectly)] to analyze a sentence [analyze a given input sentence] and [afterwards] ask the user to [force the user] confirm that its [the software’s] interpretation [of the given input sentence] is correct [confirm … or force the user to edit the interpretation using a menu]. As you can see, there is a lot of context involved, which makes it hard to derive (e.g.) claims from natural text.

As you can see in the CVE generator, having the facts makes it somewhat easy for the system to generate text in any language for humans to read and think about.