I bought a countertop dishwasher!

Update 2023/10/28: It has now been 2 years and 4 months since I bought it, and it has been washing my dishes every day without breaking down. Lately I have been using a liquid detergent called Kyukyutto "Ultra Clean". It's a neutral detergent, so I suspect the rust on the baskets has stopped progressing, too. Some of you may be wondering whether a neutral detergent can really get things clean, but I'm quite satisfied with it.
※However, grilled fish sometimes leaves an unpleasant smell, and stuck-on egg sometimes doesn't come off.

I finally bought the dishwasher I had been dreaming of!

After reading a bunch of reviews on Amazon, other shopping sites, and YouTube, I went with the AINX AX-S3W. (Amazon: https://www.amazon.co.jp/dp/B07ZNDHJ7Y / the Rakuten page I actually bought it from, as it was slightly cheaper: https://item.rakuten.co.jp/a-price/4582519730020/)

  • The reason I didn't choose the siroca 2WAY dishwasher is that its drying function didn't get good reviews. The auto-open feature looks nice, but it didn't seem possible to dry for a little while and then auto-open. UV also looks nice, but when I asked myself whether I really need it, I decided I don't.
  • Iris Ohyama's drying function also got poor reviews.
  • Aqua's drying function got somewhat poor reviews as well.
  • Toshiba's drying function also got poor reviews.
  • Panasonic's models aren't tank-based, so they were out.
  • I passed on Thanko because of its poorly rated drying function, too. Also, you can't connect a water supply hose. I currently fill the tank by hand, and a supply hose would see no use in my current home, but that might change if I ever move, so in my head it counted as a minus.
  • Maxzen, iimono117, and moosoo all appear to be the same machine. Although the Maxzen's drying function might blow plain air rather than warm air? Also, at least on Amazon, the moosoo currently appears to be out of stock. I agonized quite a bit over whether to get the moosoo. Its drying function seems the most solid, and its wash temperature is high, so there would be little worry about leftover grime. What decided me against the moosoo is that both its rinse temperature and its drying temperature are high, which made me worry about plastic containers. I believe the temperatures are somewhat lower in its quick mode, but that mode is short. It also has a "vegetable wash" course that looks nice, but the AINX's "water wash" course is actually described as a vegetable wash on an overseas site, so I figure the AINX can wash vegetables too. (I haven't tried washing vegetables, though. It might work for moderately dirty spinach or potatoes?) Also, the moosoo reportedly has no drying-only course.
  • Kwasyo's feature set looks good, but the company entered the Japanese market quite recently, so I was a little worried about things like support. That said, several of its models appear to be sold on Amazon overseas, so perhaps there isn't that much to worry about? → https://www.amazon.com/Kwasyo-Portable-Countertop-Dishwasher-High-temperature/dp/B08V5PNMW5

On the AINX, the first 10 minutes of the drying cycle use heated air, which I figured would be better than unheated air only.
Having actually used it: water does pool in recesses and remains after the drying cycle finishes, but if you point recesses downward, or flip things over after the rinse, everything dries completely. The "breeze mode", which keeps running for several days (blowing air for 15 minutes every hour), also struck me as nice and hygienic.
I also think it looks nicer than the moosoo, and the light is pretty. (Though it turns off quickly.)
It is also sold on Amazon overseas (under the name "Novete"): https://www.amazon.com/dp/B089YD3Z91/
The reviews on the overseas Amazon were good too, and YouTubers abroad have posted good reviews as well: https://www.youtube.com/watch?v=9F1c1pxvicw
In the end, it appears to be designed and manufactured by some Chinese company whose name I don't know. A track record across multiple regions raises a product's credibility somewhat, at least in my head.
Next up: cleaning performance. A normal dishwasher washes in two passes. The first pass (the "pre-wash") removes large debris with a small amount of detergent and a small amount of water; that water is drained, and then the "main wash" runs. This is why normal dishwashers usually have two detergent compartments. The AINX, at least, has only one. There is no pre-wash; it goes straight to the real thing.
It's not that it doesn't get things clean, but with oily dishes, items in the upper basket (the cutlery holder) and plastic containers in the lower basket occasionally came out slightly greasy. After switching from Finish tablets to Kyukyutto Clear Disinfecting powder detergent (with orange oil), adding a little extra depending on the load and how dirty things are, that stopped entirely.
With Finish, going with the strong variant was also somewhat effective, I think. With Kyukyutto, I never use a strong variant.
AINX is apparently releasing a UV-equipped model this year (2021); combined with breeze mode, that should be even more hygienic.

By the way, the dishwasher sits on a standing desk I bought at Off House for 2,500 yen because it looked just right. There's no noticeable vibration. (It closely resembles this one: https://direct.sanwa.co.jp/ItemPage/100-DESKF009)
It drains into a bucket placed underneath; the bucket holds 13 L, but I empty it after every use. The standing desk is tall, so there is little risk of the drain hose slipping out of place. The desk is also wide, so I keep a trash can underneath as well.
I fill the tank using the included container. The sink is actually right there, so filling up three times with the included container is reasonably painless. Many people seem to use a collapsible bucket with a spigot that holds exactly the right amount.

Useful developer tool console one-liners/bookmarklets

Many websites lack useful features, and many go to great lengths to prevent you from downloading images, etc.

When things get too annoying I sometimes open the developer tools and write a short piece of JavaScript to help me out. Okay, but isn’t it annoying to open developer tools every time? Yes! So right-click your bookmarks toolbar, press the button to add a bookmark, give it an appropriate title, and in the URL, put “javascript:” followed by the JavaScript code. (Most of the following examples already have javascript: prepended to the one-liner.)

Note that you have to encapsulate most of these snippets in an anonymous function, i.e. (function() { … })(). Otherwise your browser might open a page containing whatever value your code snippet returns.
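
For example, a trivial bookmarklet that just shows the page title (the alert message is an arbitrary placeholder) looks like this when wrapped:

javascript:(function(){ alert(document.title); })()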

(It is my belief that none of the following code snippets can, in good conscience, be considered copyrightable, as they may constitute the most obvious way to do something. The code snippets can therefore be regarded as public domain, or alternatively, at your option, as published under the WTFPL.)

Note that these snippets are only tested on Firefox. (They should work in Chrome too, though.)

Hacker News: scroll to next top-level comment

Q: How likely is this to stop working if the site gets re-designed?
A: Would probably stop working.

This is a one-liner to jump to the first/second/third/… top-level comment. (Because the first few threads often get very monotonous after a while.) If you don’t see anything happening, maybe there is no other top-level comment on the current page.

document.querySelectorAll("img[src='s.gif'][width='0']")[0].scrollIntoView(true)
document.querySelectorAll("img[src='s.gif'][width='0']")[1].scrollIntoView(true)
document.querySelectorAll("img[src='s.gif'][width='0']")[2].scrollIntoView(true)

Bookmarklet version (basically just javascript: added at the beginning):

javascript:document.querySelectorAll("img[src='s.gif'][width='0']")[1].scrollIntoView(true)

The “true” parameter in scrollIntoView(true) means that the comment will appear at the top of your screen (if possible). Passing false will scroll the comment to the bottom of your screen instead. Note that the scrolling isn’t perfect; the comment may end up half-visible.

It would be useful to be able to remember where we last jumped to, and then jump to that + 1. We can add a global variable for that, and to be reasonably sure it doesn’t clash with an existing variable we’ll give it a name like ‘pkqcbcnll’. If the variable isn’t defined yet, we’ll get an error, so to avoid errors, we’ll use typeof to determine if we need to define the variable or not.

javascript:if (typeof pkqcbcnll == 'undefined') { pkqcbcnll = 0 }; document.querySelectorAll("img[src='s.gif'][width='0']")[pkqcbcnll++].scrollIntoView(true);
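
And if you overshoot, here’s an untested variant of the same idea that jumps back to the previous top-level comment by rewinding the counter before reusing it:

javascript:(function(){ if (typeof pkqcbcnll == 'undefined' || pkqcbcnll < 2) { pkqcbcnll = 0 } else { pkqcbcnll -= 2 }; document.querySelectorAll("img[src='s.gif'][width='0']")[pkqcbcnll++].scrollIntoView(true); })()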

Wikipedia (and any other site): remove images on page

Q: How likely is this to stop working if the site gets re-designed?
A: Unlikely to stop working, ever.

Sometimes Wikipedia images aren’t so nice to look at. Here’s a quick one-liner to get rid of them:

document.querySelectorAll("img").forEach(function(i) { i.remove() });

Instagram (and e.g. Amazon and other sites): open main image in new tab

Q: How likely is this to stop working if the site gets re-designed?
A: Not too likely to stop working.

When you search for images or look at an author’s images in the grid layout, right-click one of the images and press “open in new tab”, then use the following script to open just the image in a new tab. I.e., it works on pages like this: https://www.instagram.com/p/CSSx_E3pmId/ (random cat picture, no endorsement intended.)

Or on Amazon product pages, you’ll often get a large image overlaid on the page when you hover your cursor over a thumbnail. When you execute this one-liner in that state, you’ll get that image in a new tab for easy saving/copying/sharing. E.g., on this page: https://www.amazon.co.jp/dp/B084H7ZYTT you will get this image in a new tab if you hover over that thumbnail. (Random product on Amazon, no endorsement intended.)

This script works by going through all image tags on the page and finding the source URL of the largest image. As of this writing, I believe this is the highest resolution you can get without guessing keys or reverse-engineering.

javascript:var imgs=document.getElementsByTagName("img");var height;var max_height=0;var i_for_max_height=0;for(var i=0;i<imgs.length;i++){height=parseInt(getComputedStyle(imgs[i]).getPropertyValue("height"));if(height>max_height){max_height=height;i_for_max_height=i;}}open(imgs[i_for_max_height].getAttribute("src"));

xkcd: Add title text underneath image (useful for mouse-less browsing)

Q: How likely is this to stop working if the site gets re-designed?
A: May or may not survive a redesign.

Useful for mouseless comic reading. There’s just one <img> with title text, so we’ll take the super-simple approach. Alternatively we could for example use #comic > img or we could select the largest image on the page as above.

javascript:document.querySelectorAll("img[title]").forEach(function(img) { document.getElementById("comic").innerHTML += "<br />" + img.getAttribute("title"); })

Instagram: Remove login screen that appears after a while in the search results

To do this, we have to get rid of something layered on top of the page, and then remove the “overflow: hidden;” style attribute on the body tag.

A lot of pages put “overflow: hidden” on the body tags when displaying nag screens, so maybe it’s useful to have this as a separate bookmarklet. Anyway, in the following example we do both at once.

javascript:document.querySelector("div[role=presentation]").remove() ; document.body.removeAttribute("style");
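
And here’s the overflow fix on its own, as a minimal sketch (some sites put overflow: hidden on the <html> element instead, which this doesn’t handle):

javascript:(function(){ document.body.style.removeProperty("overflow"); })()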

Sites that block copy/paste on input elements

This snippet will present a modal prompt without any restrictions and put whatever you enter into the focused text input element. This example doesn’t work in iframes, and doesn’t check that we’re actually on an input element:

javascript:(function(){ document.activeElement.value = prompt("Please enter value")})()

The following snippet also handles iframes (e.g. on https://www.w3schools.com/tags/tryit.asp?filename=tryhtml_input_test) and should even handle nested iframes (untested). The problem is that the <iframe> element becomes the activeElement when an input element is in an <iframe>. So we’ll loop until we find an activeElement that doesn’t have a contentDocument object. And then blindly assume that we’re on an input element:

javascript:(function(){var el = document.activeElement; while (el.contentDocument) { el = el.contentDocument; } el.activeElement.value = prompt("Please enter value")})()

Removing <iframe>s to get rid of a lot of ads

Many ads on the internet use <iframe> tags. Getting rid of all of these at once may clean up pages quite a bit — but some pages actually use <iframe>s for legitimate purposes.

javascript:(function(){document.querySelectorAll("iframe").forEach(function(i) { i.remove() });})();

Amazon Music: Get list of songs in your library

I.e., songs for which you have clicked the “+” icon. Maybe you want to get a list of your songs so you can move to a different service, or maybe you just want the list. The code presented here isn’t too likely to survive a redesign, but could probably be adjusted if necessary. Some JavaScript knowledge might come in handy if you want to get this script to work.

Amazon Music makes this process rather difficult because the UI unloads and reloads elements dynamically when it thinks you have too many on the page at once.

We are talking about the library’s song list page (using the default, alphabetically ordered setting), BTW.

Ideally, you would just press Ctrl+A and paste the result into an editor, or select all table cells using, e.g., Ctrl+click.

However, you’ll only get around 15 items in that case. (The previous design let you copy everything at once and was superior in other respects too, IMO.)

Anyway, we’ll use the MutationObserver to get the list. Using the MutationObserver, we’ll get notified when elements are added to the page. Then we just need to scroll all the way down and output the collected list. We may get duplicates, depending on how the page is implemented, but we’ll ignore those for now — we may have duplicates anyway if we have the same song added more than once. So I recommend you get rid of the duplicates yourself, by using e.g. sort/uniq or by loading the list into Excel or LibreOffice Calc or Google Sheets, or whatever you want.

On Amazon Music’s page, the <music-image-rows> elements that are dynamically added to the page contain four <div> elements classed col1, col2, col3, and col4. (This could of course change any time.) These <div>s contain the song name, artist name, album name, and song length, respectively, which is all we want (well, all I want). We’ll just use querySelectorAll on the newly added element to select .col1, .col2, .col3, and .col4 and output the textContent to the console. Occasionally, a parent element pops up that contains all the previous .col* <div> elements. We’ll ignore that by only evaluating selections that have exactly four elements.

  1. Scroll to top of page
  2. Execute code (e.g., by opening console and pasting in the code)
  3. Slowly scroll to the end of the page
  4. Execute observer.disconnect() in the console (otherwise text will keep popping up if you scroll)
  5. Select and copy all text in the console (or use right-click → Export Visible Messages To), paste in an editor or (e.g.) Excel. There are some Find&Replace regular expressions below that you could use to post-process the output.
  6. Note that the first few entries in the list (I think sometimes it’s just the first one, sometimes it’s the first four) are never going to be newly added to the document, so you will have to copy them into your text file yourself.

The code’s output is rather spreadsheet-friendly: tab-delimited, one line per entry. You can just paste it into your favorite spreadsheet software.

The code cannot really be called a one-liner at this point, but feel free to re-format it and package it as a bookmarklet, if you want.

observer = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
        if (mutation.type === "childList") {
            if (mutation.target && mutation.addedNodes.length) {
                var string = "";
                selector = mutation.target.querySelectorAll(".col1, .col2, .col3, .col4");
                if (selector.length == 4) {
                    selector.forEach(function(e) { if (e.textContent) string += e.textContent + "\t" });
                    string += "----------\n"
                    console.log(string);
                }
            }
        }
    });
});
observer.observe(document, { childList: true, subtree: true });

Post-processing regular expressions (the “debugger eval code” one may be Firefox-specific):

Find: [0-9 ]*debugger eval code.*\n
Replace: (leave empty)

Find: \n----------\n
Replace: \n
(Do this until there are no more instances of the "Find" expression)

Find: \t----------
Replace: (leave empty)

There are a lot of duplicates. You can get rid of them either by writing your list to a file and then executing, e.g.: sort -n amazon_music.txt | uniq > amazon_music_uniq.txt
Or you can get Excel to do the work for you.
Make sure that you have the correct number of lines. (The Amazon Music UI enumerates all entries in the list. You would want the same number of lines in your text file.)
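
If you’d rather have the browser deduplicate for you, here’s an untested sketch that collects rows into a Set instead of logging them immediately; with this variant, most of the post-processing above becomes unnecessary. (Keep in mind that it also collapses songs you legitimately added twice.)

var collected = new Set();
observer = new MutationObserver(function(mutations) {
    mutations.forEach(function(mutation) {
        if (mutation.type === "childList" && mutation.target && mutation.addedNodes.length) {
            var cols = mutation.target.querySelectorAll(".col1, .col2, .col3, .col4");
            if (cols.length == 4) {
                var row = "";
                cols.forEach(function(e) { row += e.textContent + "\t" });
                collected.add(row); // a Set silently drops exact duplicates
            }
        }
    });
});
observer.observe(document, { childList: true, subtree: true });
// After scrolling to the bottom of the list, run:
// observer.disconnect(); collected.forEach(function(row) { console.log(row) });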

Outputting QR codes on the terminal

In these dark ages, a lot of software (mostly chat apps) only works on smartphones. While it’s easy to connect a USB-OTG hub to most smartphones (even my dirt-cheap, now three-year-old Android smartphone supports this), having two keyboards on your desk can be kind of annoying.

While there are a bunch of possible solutions to this problem, many of them do not fix the problem when you’re not on your home setup, which is why I often just use QR codes to send URLs to my phone. There are a lot of QR code generator sites out there.

QR code generator sites are useful because they work everywhere, but many are slow and clunky. Perhaps acceptable in a pinch, but… what if you could just generate QR codes on the terminal?

Well, some cursory googling revealed this library: https://github.com/qpliu/qrencode-go, which doesn’t have any external (non-standard library) dependencies, is short enough to skim over for malicious code, and comes with an easily adapted example. (I am reasonably confident that there is no malicious code at ad8353b4581fa11fc01a50ebf56db3833462fc13.)

Note: I very rarely use Go. Here is what I did to compile this:

$ git clone https://github.com/qpliu/qrencode-go
$ cd qrencode-go
$ mkdir src
$ mv qrencode/ src/

$ cat > qrcodegenerator.go
package main

import (
    "bytes"
    "os"
    "qrencode"
)

func main() {
    var buf bytes.Buffer
    for i, arg := range os.Args {
        if i > 1 {
            if err := buf.WriteByte(' '); err != nil {
                panic(err)
            }
        }
        if i > 0 {
            if _, err := buf.WriteString(arg); err != nil {
                panic(err)
            }
        }
    }
    grid, err := qrencode.Encode(buf.String(), qrencode.ECLevelQ)
    if err != nil {
        panic(err)
    }
    grid.TerminalOutput(os.Stdout)
}
$ GOPATH=$PWD go build qrcodegenerator.go
$ ./qrcodegenerator test
// QR CODE IS OUTPUT HERE

Note: the above code is adapted from example code in the README.md file and is therefore LGPL3.

Since Go binaries are static (that’s what I’ve heard at least), you can then move the executable anywhere you like (e.g. ~/bin) and generate QR codes anywhere. Note that they’re pretty huge, i.e. for ‘https://blog.qiqitori.com’ (26 bytes) the QR code’s width will be 62 characters. For e.g. ‘https://blog.qiqitori.com/2020/10/outputting-qr-codes-on-the-terminal/’ (this post) the width is 86 characters.

A simple netcat-based DNS server that returns NXDOMAIN on everything

sudo ncat -i1 -k -c "perl -e 'read(STDIN, \$dns_input, 2); \$dns_id = pack \"a2\", \$dns_input; print \"\$dns_id\x81\x83\x00\x00\x00\x00\x00\x00\x00\x00\";'" -u -vvvvvv -l 127.2.3.4 53
  • A DNS request contains two random bytes at the beginning that have to appear in the first two bytes in the response.
  • The DNS flags for an NXDOMAIN response are 0x81 0x83
  • The rest of the bytes can be 0, which mostly means that we have zero other sections in our response
  • The above example uses nmap-ncat, as found in Red Hat-based distributions, but it can also be installed on Debian-based distributions (apt-get install ncat)
  • -i1 causes connections to be discarded after 1 second of idle time (optional)
  • -k means that we can accept more than one connection
  • -c means that whatever we get from the other side of the connection gets piped to a perl process running in a shell process (maybe -e is the same in this case)
  • -u means UDP (leaving this out should work if you do DNS over TCP)
  • -vvvvvv means that we can see what’s happening (optional)
  • -l means that we’re listening rather than sending, on 127.2.3.4, port 53
  • read(STDIN, $dns_input, 2) # read exactly two bytes from STDIN
  • $dns_id = pack "a2", $dns_input # two bytes of arbitrary random data from $dns_input will be put into $dns_id
  • print "$dns_id\x81\x83\x00\x00\x00\x00\x00\x00\x00\x00" # sends $dns_id, NXDOMAIN, and zeros as described above to the other side
  • Note: I didn’t really test this beyond the proof-of-concept stage. If anything’s iffy, feel free to let me know.
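
For comparison, in case you’d rather not parse the Perl, here’s an untested sketch of the same responder in Node.js; it builds the same 12-byte header described in the list above:

const dgram = require("dgram");
const server = dgram.createSocket("udp4");
server.on("message", function(msg, rinfo) {
    // copy the 2-byte transaction ID from the request, then the NXDOMAIN
    // flags 0x81 0x83, then four zeroed 16-bit section counts
    const reply = Buffer.from([msg[0], msg[1], 0x81, 0x83, 0, 0, 0, 0, 0, 0, 0, 0]);
    server.send(reply, rinfo.port, rinfo.address);
});
server.bind(53, "127.2.3.4"); // binding to port 53 usually requires root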

Slow DNS in Docker containers without internet connection

If you’re running a Docker container on a Docker network that should _normally_ have internet access, but doesn’t (for whatever reason, see next paragraph for an example), you might find that DNS lookups in that Docker container will be very, very slow. If the DNS lookup “freezes” your program (prevents your program from serving further requests for a short while, etc.), this can be very inconvenient. (For example, and this is how I noticed the problem: if you’re ssh’ing into a container using dynamic port-forwarding to access other containers, every DNS lookup will freeze your ssh connection.)

In my case, I’m running a (prototype? beta?) test environment that is generally not supposed to connect to the internet to avoid “accidentally” doing silly things in production. However, a few sites have to be whitelisted, and whitelisting has to be done on a DNS basis. If you have similar needs, the solution here might help you. Though it’s hacky.

Diving in

I decided to dive in and see if I can change this behavior at all. Normal DNS failure time:

$ time curl tired.com
curl: (6) Could not resolve host: tired.com; Unknown error

real 0m0.009s
user 0m0.005s
sys 0m0.000s

In a Docker container:

$ time curl tired.com
curl: (6) Could not resolve host: tired.com; Unknown error

real 0m20.599s
user 0m0.010s
sys 0m0.010s

When you don’t have internet access, your host system will in most cases lack a ‘default’ route in the output of ‘ip route’. However, your containers don’t know anything about your host system’s routing tables, and your container’s namespace will still have its default route.

Note: Docker uses the host’s /etc/resolv.conf to figure out where to forward DNS requests to, and if /etc/resolv.conf doesn’t specify any servers, Docker will use 8.8.4.4 and/or 8.8.8.8 by default. You can override the default in /etc/docker/daemon.json. (I believe you have to restart Docker after changing the file; sending a kill -HUP will reload some of the settings specified in the file, but not this one, AFAICT.) Anyway, the below examples will all show 8.8.4.4 or 8.8.8.8.

When you do curl tired.com in a Docker container, it will send this DNS request to Docker’s internal DNS resolver, as specified in the container’s /etc/resolv.conf:

nameserver 127.0.0.11

One easy thing we can do is add the following to the container’s /etc/resolv.conf. This speeds things up quite a bit:

options timeout:1 attempts:1
$ time curl tired.com
curl: (6) Could not resolve host: tired.com; Unknown error

real 0m2.541s
user 0m0.010s
sys 0m0.014s

(The following implementation details were gathered from stracing the dockerd process, and might change in the future.) This nameserver runs in the dockerd process, but the dockerd process switches to the container’s network name space before forwarding the request:

# strace -vvvttf -p $dockerd_pid
2124 03:18:52.686952 openat(AT_FDCWD, "/var/run/docker/netns/dd995925297c", O_RDONLY) = 20
2124 03:18:52.686999 setns(20, CLONE_NEWNET) = 0
2124 03:18:52.687058 socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 21
2124 03:18:52.687088 setsockopt(21, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
2124 03:18:52.687119 connect(21, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("8.8.8.8")}, 16) = 0

Tangent: here’s one way to ascertain which namespace this is:

# nsenter --net=/var/run/docker/netns/dd995925297c ip a
...
inet 172.18.0.13/16 brd 172.18.255.255 scope global eth1
...

Then you can just issue docker inspect or docker network inspect commands to figure out which container this IP belongs to.

And back: if you do the same UDP connection with the same parameters on the host system, DNS requests fail straight away, because the kernel is sensible enough to notice that there is no route to this host:

08:06:25.289709 socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 3
08:06:25.289858 setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
08:06:25.289981 connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("8.8.8.8")}, 16) = -1 ENETUNREACH (Network is unreachable)

So one thing I thought we could do is to force this connect call to fail. I thought one way might be to get Docker to use TCP to connect to DNS servers. This can be accomplished in /etc/docker/daemon.json like this:

{
    "dns-opt": "use-vc"
}

Unfortunately that didn’t help because of the SOCK_NONBLOCK flag. The strace now looks like this:

1580  18:55:02.447528 socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 34
1580 18:55:02.447571 setsockopt(34, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
1580 18:55:02.447612 connect(34, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("8.8.8.8")}, 16) = -1 EINPROGRESS (Operation now in progress)

(Note: adding attempts:1 timeout:1 to dns-opt didn’t seem to have an effect.)

Furthermore, removing the default route doesn’t help either. curl still blocks for ~2.526 seconds. Docker keeps retrying even if it gets ENETUNREACH:

# nsenter --net=/var/run/docker/netns/dd995925297c ip route del default
19:39:44.614836 connect(21, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("8.8.8.8")}, 16) = -1 ENETUNREACH (Network is unreachable)

So up to now we have avoided reading Docker’s source code (it’s written in a weird, foreign language), but at this point we have no choice but to take a look. Perhaps my assumptions were wrong and Docker’s DNS resolver actually notices there is a problem, but just retries no matter what. The first thing we do is enable debug logs, so that we hopefully get some messages that are output close to the code we need to look at. We just need to set the following in /etc/docker/daemon.json, reload (kill -HUP) dockerd, and run journalctl -xeu docker:

{
    "debug": true
}

Here are some logs we got after we removed our default route:

20-04-05T19:59:29.285616784+09:00" level=warning msg="[resolver] connect failed: dial udp 8.8.8.8:53: connect: network is unreachable"
20-04-05T19:59:29.285646884+09:00" level=warning msg="[resolver] connect failed: dial udp 8.8.4.4:53: connect: network is unreachable"

And here are some logs we got with the default route still existing:

20-04-05T19:30:14.118994138+09:00" level=debug msg="[resolver] read from DNS server failed, read udp 172.19.0.2:40693->8.8.8.8:53: i/o timeout"20-04-05T19:30:17.115909962+09:00" level=debug msg="[resolver] read from DNS server failed, read udp 172.19.0.2:55315->8.8.4.4:53: i/o timeout"

Searching the docker code takes us straight to the ServeDNS function in docker-ce/components/engine/vendor/github.com/docker/libnetwork/resolver.go. (I did a fresh clone today; the newest commit in my git log is 92768b32964e3037e520ab8e74fe190c39f4c83d. The code may look different in your version.)

So we’ve got a long for loop on line 432 that has some breaks and some continues and uses a couple hard-coded constants here and there.

maxExtDNS       = 3 //max number of external servers to try
extIOTimeout = 4 * time.Second
...
for i := 0; i < maxExtDNS; i++ {

One thing we could do is look for a workaround, i.e. anything that will make this function terminate earlier. Unfortunately, I didn’t have any success on that front. So it looks like we currently have no choice but to set up our own DNS server just to prevent Docker from freezing our ssh connections (or whatever software is being frozen in your case).

Setting up our own DNS server

Unfortunately, we can’t just run dnsmasq on the host and make it listen on 127.0.0.1. 127.0.0.1 obviously means something different inside a container than it does on the host. We generally can’t use any of the other IPs the host machine may have either — Docker most likely adds iptables rules that block containers from communicating with those IPs. (Though we could manually delete these iptables rules.)

So one way around this problem is to run a Docker container that runs dnsmasq. Unfortunately, this can get a bit messy — we need to use a static IP on that container, and we need to configure our host to use that IP in its /etc/resolv.conf.

One kind of nice — though very hacky — solution I came up with is to add a network and two containers that pretend to be 8.8.4.4/8.8.8.8 (or one container and two networks).

### fake-dns/Dockerfile
FROM centos
RUN yum -y install dnsmasq
ENTRYPOINT /usr/sbin/dnsmasq --server=/qiqitori.com/1.1.1.1 -R
$ docker build --tag fake-dns fake-dns
$ docker network create fake-dns-net --subnet 8.8.0.0/16
$ docker run -d --network fake-dns-net -it --ip 8.8.4.4 fake-dns
$ docker run -d --network fake-dns-net -it --ip 8.8.8.8 fake-dns

$ docker network connect fake-dns-net existing-container

Connecting the already-running containers whose DNS you need to fix to fake-dns-net will make DNS requests immediately get NXDOMAIN responses from our fake 8.8.4.4 and 8.8.8.8 servers, which takes care of any freezing issues you may have, all while DNS requests for other container names still work as normal. (In this example, qiqitori.com is whitelisted and will be forwarded to Cloudflare’s 1.1.1.1 DNS servers. The -R flag on the dnsmasq command means that dnsmasq will ignore any servers listed in /etc/resolv.conf.)

Conclusions

In my opinion, Docker’s automatic fallback to 8.8.4.4/8.8.8.8 isn’t the greatest Docker feature to say the least. Perhaps there should be a way to tell Docker not to fall back, and to send NXDOMAIN when it can’t answer anything by itself.

Unfortunately, this “hack” is likely not that future-proof. Future versions of Docker could for example expand or change the list of fallback servers, and this hack would have to be adapted to Docker’s changes. However, explicitly specifying { "dns": ["8.8.4.4", "8.8.8.8"] } in /etc/docker/daemon.json could perhaps take care of this problem. (I haven’t actually tested that.)

Bash: how to put command output/file contents on the command line

In certain situations, you may want to place a command that you (e.g.) saved in a file onto your command line before executing it. As I didn’t find an answer straight away, here’s a quick-and-dirty approach that runs a command when you press a keyboard shortcut:

bind -x '"\C-g":"READLINE_LINE=$(cat dnsmasq_command)"'

This will put the contents of the file named ‘dnsmasq_command’ on your command line when you press Control-G.

And while we’re at it, while it’s got nothing to do with this article, here’s a binding that puts a time stamp (like 20200408022000, no, not a palindrome) on your command line:

bind -x '"\C-g":"READLINE_LINE=${READLINE_LINE:0:$READLINE_POINT}$(date +%Y%m%d%H%M00)${READLINE_LINE:$READLINE_POINT}; READLINE_POINT=$((($READLINE_POINT+14)))"'

Skype for Business fortunes

Skype for Business:
    Accept any substitute.
    If it's broke, don't fix it.
    If it ain't broke, fix it.
    Form follows malfunction.
    The Cutting Edge of Obsolescence.
    The trailing edge of software technology.
    Armageddon never looked so good.
    Japan's secret weapon.
    You'll envy the dead.
    Making the world safe for competing communication tools.
    Let it get in YOUR way.
    The problem for your problem.
    If it starts working, we'll fix it.  Pronto.
    It could be worse, but it'll take time.
    Simplicity made complex.
    The greatest productivity aid since typhoid.
    Flakey and built to stay that way.
 
One thousand monkeys.  One thousand Windows 8 machines.  One thousand years.
    Skype for Business.
%
Skype for Business:
    It's not how slow you make it.  It's how you make it slow.
    The communication tool preferred by masochists 3 to 1.
    Built to take on the world... and lose!
    Don't try it 'til you've knocked it.
    Power tools for Power Fools.
    Putting new limits on productivity.
    The closer you look, the cruftier we look.
    Design by counterexample.
    A new level of software disintegration.
    No hardware is safe.
    Do your time.
    Rationalization, not realization.
    Old-world software cruftsmanship at its finest.
    Gratuitous incompatibility.
    Your mother.
    THE user interference management system.
    You can't argue with failure.
    You haven't died 'til you've used it.
 
The environment of today... tomorrow!
    Skype for Business.
%
Skype for Business:
    Something you can be ashamed of.
    30% more entropy than the leading communication tool.
    The first fully modular software disaster.
    Rome was destroyed in a day.
    Warn your friends about it.
    Climbing to new depths.  Sinking to new heights.
    An accident that couldn't wait to happen.
    Don't wait for the movie.
    Never use it after a big meal.
    Need we say less?
    Plumbing the depths of human incompetence.
    It'll make your day.
    Don't get frustrated without it.
    Power tools for power losers.
    A software disaster of Biblical proportions.
    Never had it.  Never will.
    The software with no visible means of support.
    More than just a generation behind.
 
Hindenburg.  Titanic.  Edsel.
    Skype for Business.
%
Skype for Business:
    The ultimate bottleneck.
    Flawed beyond belief.
    The only thing you have to fear.
    Somewhere between chaos and insanity.
    On autopilot to oblivion.
    The joke that kills.
    A disgrace you can be proud of.
    A mistake carried out to perfection.
    Belongs more to the problem set than the solution set.
    To err is Skype for Business.
    Ignorance is our most important resource.
    Complex nonsolutions to simple nonproblems.
    Built to fall apart.
    Nullifying centuries of progress.
    Falling to new depths of inefficiency.
    The last thing you need.
    The defacto substandard.
 
Elevating brain damage to an art form.
    Skype for Business.
%
Skype for Business:
    We will dump no core before its time.
    One good crash deserves another.
    A bad idea whose time has come.  And gone.
    We make excuses.
    It didn't even look good on paper.
    You laugh now, but you'll be laughing harder later!
    A new concept in abuser interfaces.
    How can something get so bad, so quickly?
    It could happen to you.
    The art of incompetence.
    You have nothing to lose but your lunch.
    When uselessness just isn't enough.
    More than a mere hindrance.  It's a whole new barrier!
    When you can't afford to be right.
    And you thought we couldn't make it worse.
 
If it works, it isn't Skype for Business.
%
Skype for Business:
    You'd better sit down.
    Don't laugh.  It could be YOUR thesis project.
    Why do it right when you can do it wrong?
    Live the nightmare.
    Our bugs run faster.
    When it absolutely, positively HAS to crash overnight.
    There ARE no rules.
    You'll wish we were kidding.
    Everything you never wanted in a communication tool.  And more.
    Dissatisfaction guaranteed.
    There's got to be a better way.
    The next best thing to keypunching.
    Leave the thrashing to us.
    We wrote the book on core dumps.
    Even your dog won't like it.
    More than enough rope.
    Garbage at your fingertips.
 
Incompatibility.  Shoddiness.  Uselessness.
    Skype for Business.

Adapted from https://lwn.net/Articles/26612/.

Standing on the shoulders of giants

Making new discoveries usually requires knowledge from previous discoveries. Discoveries are published as text in research papers. Scientists find these research papers using keywords they plug into a search engine or when other research papers cite them. Being able to search by keywords has obviously made scientific research much easier than before.

However, the research papers themselves are still written in natural languages, and unless the authors avoid grammatical features like the words “it” or “them” (which exist to avoid repeating nouns over and over again), it’s difficult to extract meaning from them automatically.

So perhaps people shouldn’t be writing research papers in (e.g.) English and instead use something that is easier for computers to parse. The research paper would contain claims, some numeric indicator of how certain these claims are (or where they came from), and how these claims came about (e.g. experiment (with instructions for how the experiment was set up, and why), logical reasoning (with the actual chain of reasoning), etc.). Then you could, for example, search for claims, instead of just keywords.

Instead of just using the keywords “ocean plastic waste”, you could search for “plastic waste in oceans negatively impacts ocean fauna”. Instead of typing in this query, you would choose every noun, verb and other qualifiers from a searchable menu. You would be able to display this menu in any language. The words in the menu would all mean a specific thing, and anybody would be able to look at the definitions and properties of every word. Eventually, the system could be programmed to (e.g.) look at two different claims and judge whether these claims might be related, based on common subjects or objects, and generate potential new claims all by itself. For example, claim 1: “Thunderstorms may under certain circumstances cause blackouts” and claim 2: “Certain atmospheric conditions cause thunderstorms” would logically mean that “certain atmospheric conditions cause blackouts”, which is true. The system probably shouldn’t let users use the word “certain” though.
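
As a toy illustration of that kind of chaining (my own sketch, not an existing system), claims could be stored as subject/verb/object triples and naively joined on matching terms:

var claims = [
    { subject: "thunderstorms", verb: "cause", object: "blackouts" },
    { subject: "atmospheric conditions X", verb: "cause", object: "thunderstorms" }
];
// naive chaining: if A causes B and B causes C, propose "A may cause C"
claims.forEach(function(a) {
    claims.forEach(function(b) {
        if (a.object === b.subject) {
            console.log("Proposed claim:", a.subject, "may cause", b.object);
        }
    });
});
// logs: Proposed claim: atmospheric conditions X may cause blackouts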

How hard would it be to make such a system? Well, it’s best to start in a single niche and slowly expand the system. Here’s a very simple (completely static) example that is a bit different in that it just generates natural English or Japanese text based on what values are selected in the menus: https://blog.qiqitori.com/cve_generator/

Saving the selections would be a first step in the right direction. Currently, only one “allows attackers to:” is allowed, but you would probably want to 1) allow an arbitrary-length chain, 2) add something indicating the author’s degree of certainty (“may allow”), 3) add other verbs and other nouns, and 4) record the reasoning behind a set of these claims (it gets complex here, but in the end you just have to select subjects, verbs, objects, ifs, etc.).

Perhaps selecting things from a menu would get old quick (unless you just do it for the main claims). So the computer could try its best to analyze a sentence and ask the user to confirm that its interpretation is correct. The preceding sentence could be analyzed like this, but you’ll see that this could be tricky and just selecting something from a menu might be the better option after all: So [potential solution for previous follows:] the computer [the software] could try its best [could potentially (but probably wouldn’t manage to do this perfectly)] to analyze a sentence [analyze a given input sentence] and [afterwards] ask the user to [force the user] confirm that its [the software’s] interpretation [of the given input sentence] is correct [confirm … or force the user to edit the interpretation using a menu]. As you can see, there is a lot of context involved, which makes it hard to derive (e.g.) claims from natural text.

As you can see in the CVE generator, having the facts makes it somewhat easy for the system to generate text in any language for humans to read and think about.

irssi/perl memory leak

Some irssi users have reported that their irssi memory usage reaches 1-2 GB after a few months of usage. This time, the cause was a memory leak in Perl: https://rt.perl.org/Public/Bug/Display.html?id=130254

This bug affects Perl 5.24 (which is “current” in e.g. Debian Stable (Stretch)), and if you use certain regexes you will probably see memory leakage.

If you suspect a memory leak in irssi, here’s one way to find out more about its nature: dump irssi’s core using gcore. (irssi will be stopped during the dumping process but will carry on where it left off as soon as the dump is completed. If it takes a long time to dump the core, you may time out from some or all servers.) To do this, find irssi’s PID (e.g. by doing ps aux | grep irssi) and then execute:

gcore PID

You’ll get a core file that is as large as irssi’s memory usage at the time of the dump. If you have a memory leak due to the above Perl bug, you will have a lot of strings that start with “Assuming NOT a POSIX class” in your core file, which you can check using the following command:

strings core | grep "Assuming NOT a POSIX class" | wc

If this command outputs large numbers, your memory leak is most likely due to the above-mentioned Perl 5.24 bug. If you don’t get any output, you might still have a memory leak that can be found using the strings command. Either go through the output of the strings command manually and see if you can find any repeated messages, or maybe make use of the following command:

strings core | sort | uniq -c | sort -n -r | less

Let me or the people on #irssi on Freenode know if you find any other leaks.

Decoding Docker’s local-kv.db

Network problems in Docker can often be “fixed” by deleting /var/lib/docker/network/files/local-kv.db. However, in some cases it might be possible to just edit the bits that are wrong. This file is a BoltDB database.

Here’s some code to extract the keys contained in the file (libkv_example.go, heavily inspired by the example code in libkv/docs/examples.md):

package main

import (
    "time"
    "log"

    "github.com/docker/libkv"
    "github.com/docker/libkv/store"
    "github.com/docker/libkv/store/boltdb"
)

func init() {
    // Register boltdb store to libkv
    boltdb.Register()
}

func main() {
    client := "./local-kv.db" // ./ appears to be necessary

    // Initialize a new store
    kv, err := libkv.NewStore(
        store.BOLTDB, // or "boltdb"
        []string{client},
        &store.Config{
            Bucket: "libnetwork",
            ConnectionTimeout: 10*time.Second,
        },
    )
    if err != nil {
        log.Fatalf("Cannot create store: %v", err)
    }

    pair, err := kv.List("docker/network")
    if err != nil {
        // e.g. the bucket name is wrong or the file is unreadable
        log.Fatalf("Cannot list keys: %v", err)
    }
    for _, p := range pair {
        println("key:", string(p.Key))
        println("value:", string(p.Value))
    }
}

Make sure to work on a copy of your local-kv.db file, and that you have write permissions to your copy. Also note that this script is anything but thoroughly tested.

If you’re new to go like me, here are the commands to install Go, the required libraries and run the program (if you’re on Debian):

sudo apt-get install golang-1.8-go
PATH=/usr/lib/go-1.8/bin/:$PATH
# feel free to try go build libkv_example.go without the go get commands
# you'll most likely get an error like this:
# libkv_example.go:7:5: cannot find package "github.com/docker/libkv" in any of:
# ...
go get github.com/docker/libkv
go get go.etcd.io/bbolt
go build libkv_example.go
./libkv_example