Spreadsheets vs. Command Line Utilities vs. SQL (for Pivot Tables)

When processing text files on Linux, you have a lot of choice: Sed, Awk, Perl, plain coreutils, or perhaps a spreadsheet application? I’m a reasonably educated spreadsheet application user, but I’m also a reasonably educated command line user and a reasonably educated SQL user. This article pits the three approaches against each other.

In this article, I’ll show different ways to process a large CSV file: one solution using a spreadsheet application, one solution using standard CLI utilities (GNU coreutils and GNU datamash), and one solution using q (http://harelba.github.io/q, “Run SQL directly on CSV files”), plus a variant using sqlite3, which is almost the same.

Conclusions

Yes, I’m putting my conclusions first. If you need to create a Pivot Table from CSV files, I believe SQL is the best solution. The q utility makes using SQL very comfortable.

The data

The dataset used in this article describes export statistics, i.e., trade from Japan to other countries. We would like to do a simple Pivot Table-like task that would be really easy in Excel: find the total export volume (in JPY) from Japan to a specific country for every HS “section”. Here are some examples of HS sections and their corresponding HS chapters:

Chapters 01-05: LIVE ANIMALS; ANIMAL PRODUCTS
Chapters 06-14: VEGETABLE PRODUCTS
Chapter 15: ANIMAL OR VEGETABLE FATS AND OILS AND THEIR CLEAVAGE PRODUCTS; PREPARED EDIBLE FATS; ANIMAL OR VEGETABLE WAXES

The following links are for imports rather than exports, but that shouldn’t matter in our case. The whole table is at http://www.customs.go.jp/english/tariff/2018_4/index.htm and contains links to further tables describing the HS codes in each HS chapter. For example, here’s the table for section I, “LIVE ANIMALS; ANIMAL PRODUCTS”, chapter 01, “Live animals”: http://www.customs.go.jp/english/tariff/2018_4/data/e_01.htm.

The HS codes in our dataset look like this: ‘010121000’; the first two digits correspond to the HS chapter, which is all we are going to look at for now. We have to group by these two digits.

The files

I downloaded all the CSV files on this page: https://www.e-stat.go.jp/stat-search/files?page=1&layout=datalist&toukei=00350300&tstat=000001013141&cycle=1&year=20170&month=24101212&tclass1=000001013180&tclass2=000001013181&result_back=1 (English) and merged them into a single file, data.csv like this:

head -n 1 ik-100h2017e001.csv > data.csv
tail -q -n +2 ik-100h2017e0*csv >> data.csv
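As a quick sanity check, data.csv should end up with the combined line count of all input files minus one discarded header line per file except the first; something like this makes that easy to eyeball:

$ wc -l ik-100h2017e0*csv data.csv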

The HS chapters/sections are described here: http://www.customs.go.jp/english/tariff/2018_4/index.htm (English. A Japanese page is available too, of course.)

The country codes are listed here: http://www.customs.go.jp/toukei/sankou/code/country_e.htm (English. Japanese is available.)

data.csv.gz
countries.csv
hs_sections.csv
hs_chapters_to_sections.csv
hs_sections_no_to_descriptions.csv

The spreadsheet solution

I won’t go into much detail here. First of all, we add worksheets for all of the above files (or reference external files). Then we add a column to compute the first two digits in the HS codes, using a function like MID(C2,2,2). We use VLOOKUP() to look up the HS section. (Perhaps we use another VLOOKUP() for the country codes.) Then we create a pivot table. (It would be more efficient to VLOOKUP() from the pivot table, but while I believe that to be possible in Excel, I’m not sure it’s possible in OpenOffice/LibreOffice.)

Anyway, using spreadsheets is rather user-friendly, but large files take quite a while to process. Adding extra columns to the original data is very inconvenient too. (Using calculated fields in Excel may help with this.)

The CLI/GNU(?) solution

We are going to make use of GNU datamash here. GNU datamash is capable of grouping and summing, which is already halfway there. For the lookups, we use the join command(!), which is part of coreutils.

We need to do some minor pre-processing, as we do not want the header rows in this solution:

tail -n +2 data.csv > data_nh.csv
tail -n +2 countries.csv > countries_nh.csv
tail -n +2 hs_sections.csv > hs_sections_nh.csv

The other files do not have any headers. So far so good, but using common CLI tools gets a bit awkward in the next step, cutting off characters in the middle of the HS code field. Let’s isolate that field:

$ cut -d, -f3 data_nh.csv | head -n3
'010121000'
'010121000'
'010121000'

Then we cut off the unneeded characters:

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | head -n3
01
01
01

Then we need to re-add the other columns. This is one of the slightly awkward steps when doing this using CLI tools. Let’s isolate the other relevant columns first though:

$ cut -d, -f4,9 data_nh.csv | head -n3
103,2100
105,1800
205,84220

To paste these two columns back onto the first isolated column, we use the aptly(?) named paste command. The -d option lets us combine fields using a comma as the delimiter (the default is a tab). We’ll pass the HS chapter digits on standard input, and the other two relevant columns using bash’s <() process substitution.
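Here’s a toy illustration of paste -d, combining one column from standard input (the ‘-’) with one from a process substitution, on made-up data:

$ seq 3 | paste -d, - <(printf 'a\nb\nc\n')
1,a
2,b
3,c

Applied to our data: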

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | head -n3
01,103,2100
01,105,1800
01,205,84220

What we have now is a trimmed CSV that goes "HS chapter","Country Code","Amount".

The “VLOOKUP” part is slightly tricky. We are going to use the little-known join command, which is included in coreutils. Some HS sections and some country names have commas in them, which are a bit inconvenient, but not a huge problem as the result of the “VLOOKUP” is attached to the right of the entire original data.

Here’s a quick demonstration of the join command. (Note: countries_nh.csv is pre-sorted. Everything passed to join must be sorted.)

$ echo 222 | join -t, - countries_nh.csv
222,"Finland"

In Excel, we are able to safely group by cells that may contain commas, but not so in datamash. I also left out something above: we’ve got the HS chapter code, but what we actually wanted was to look up the HS section for each chapter and group by that. So let’s go back one step and use join to get the HS section number from the HS chapter number. Note that all input to join must be sorted, so we add a pipe that sorts on the first field (the chapter code) before the join:

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | sort -n -t, -k1,1 | join -t, - hs_chapters_to_sections.csv | head -n3
00,103,276850736,0
00,105,721488020,0
00,106,258320777,0

Getting the sort command to sort correctly by a single field isn’t very easy. If it weren’t for the --debug option, that is! In this case we want to sort by the first field, so the command becomes 'sort -n -t, -k1,1'. (Start field == end field == 1, so -k1,1.) Debug output looks like this:

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | sort -n -t, -k1,1 --debug | head
sort: using ‘en_US.UTF-8’ sorting rules
00,103,276850736
__
________________
00,105,721488020
__
________________
00,106,258320777
__
________________

The field that has been sorted gets underlined. Great! Now let’s do the pivoting part using datamash:

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | sort -n -t, -k1,1 | join -t, - hs_chapters_to_sections.csv | datamash -s -t, groupby 2,1 sum 3
103,00,276850736
103,01,418085
103,03,14476769

The -s option makes datamash sort its input, which groupby requires. The -t option selects ‘,’ as the delimiter. This command groups by column 2 (the country code) and column 1 (the HS chapter), and computes the sum of column 3 for each group. So this is it! If we know the country codes and HS chapters by heart, that is.
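To see the grouping semantics in isolation, here’s a toy example on made-up data, grouping by the first two columns and summing the third:

$ printf '1,a,10\n1,a,5\n2,b,3\n' | datamash -s -t, groupby 1,2 sum 3
1,a,15
2,b,3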

Well, what we really want to group by is the HS section number (column 4) rather than the chapter. Our output from datamash starts with the country code, and things are nice and sorted, so we can switch the grouping to columns 2 and 4 and just join again:

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | sort -n -t, -k1,1 | join -t, - hs_chapters_to_sections.csv | datamash -s -t, groupby 2,4 sum 3 | join -t, - countries_nh.csv | head -n3
103,0,276850736,"Republic of Korea"
103,1,15325799,"Republic of Korea"
103,10,50079044,"Republic of Korea"

Next we would like to look up the HS section number to get the HS section description. In the above commands, we joined on the first field, but fortunately join supports joining on other fields. We need to sort on the second field and then tell join to use the second field of its first input, which is accomplished with the -1 option and a value of 2 (so -1 2, or simply -12, though that may look confusing).
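Here’s a toy illustration of joining on the second field of the left-hand input, again on made-up data:

$ printf 'x,2\ny,1\n' | sort -t, -k2,2 | join -12 -t, - <(printf '1,one\n2,two\n')
1,y,one
2,x,two

Applied to our pipeline: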

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | sort -n -t, -k1,1 | join -t, - hs_chapters_to_sections.csv | datamash -s -t, groupby 2,4 sum 3 | join -t, - countries_nh.csv | sort -n -t, -k2,2 | join -12 -t, - hs_sections_no_to_descriptions.csv | head -n3
00,103,276850736,"Republic of Korea","Unknown"
00,105,721488020,"People's Republic of China","Unknown"
00,106,258320777,"Taiwan","Unknown"

That’s it! To get e.g. Finland, we’ll make it easy for ourselves and just grep for Finland:

$ cut -d, -f3 data_nh.csv | cut -c 2-3 | paste -d, - <(cut -d, -f4,9 data_nh.csv) | sort -n -t, -k1,1 | join -t, - hs_chapters_to_sections.csv | datamash -s -t, groupby 2,4 sum 3 | join -t, - countries_nh.csv | sort -n -t, -k2,2 | join -12 -t, - hs_sections_no_to_descriptions.csv | grep Finland
0,222,1758315,"Finland","Unknown"
2,222,10613,"Finland","VEGETABLE PRODUCTS"
3,222,654,"Finland","ANIMAL OR VEGETABLE FATS AND OILS AND THEIR CLEAVAGE PRODUCTS; PREPARED EDIBLE FATS; ANIMAL OR VEGETABLE WAXES"
4,222,45021,"Finland","PREPARED FOODSTUFFS; BEVERAGES, SPIRITS AND VINEGAR; TOBACCO AND MANUFACTURED TOBACCO SUBSTITUTES"
5,222,33611,"Finland","MINERAL PRODUCTS"
6,222,2624353,"Finland","PRODUCTS OF THE CHEMICAL OR ALLIED INDUSTRIES"
7,222,4880410,"Finland","PLASTICS AND ARTICLES THEREOF; RUBBER AND ARTICLES THEREOF"
8,222,12557,"Finland","RAW HIDES AND SKINS, LEATHER, FURSKINS AND ARTICLES THEREOF; SADDLERY AND HARNESS; TRAVEL GOODS, HANDBAGS AND SIMILAR CONTAINERS; ARTICLES OF ANIMAL GUT (OTHER THAN SILK-WORM GUT)"
9,222,3766,"Finland","WOOD AND ARTICLES OF WOOD; WOOD CHARCOAL; CORK AND ARTICLES OF CORK; MANUFACTURES OF STRAW, OF ESPARTO OR OF OTHER PLAITING MATERIALS; BASKETWARE AND WICKERWORK"
10,222,38476,"Finland","PULP OF WOOD OR OF OTHER FIBROUS CELLULOSIC MATERIAL; RECOVERED (WASTE AND SCRAP) PAPER OR PAPERBOARD; PAPER AND PAPERBOARD AND ARTICLES THEREOF"
11,222,527084,"Finland","TEXTILES AND TEXTILE ARTICLES"
12,222,1541,"Finland","FOOTWEAR, HEADGEAR, UMBRELLAS, SUN UMBRELLAS, WALKING-STICKS, SEAT-STICKS, WHIPS, RIDING-CROPS AND PARTS THEREOF; PREPARED FEATHERS AND ARTICLES MADE THEREWITH; ARTIFICIAL FLOWERS; ARTICLES OF HUMAN HAIR"
13,222,991508,"Finland","ARTICLES OF STONE, PLASTER, CEMENT, ASBESTOS, MICA OR SIMILAR MATERIALS; CERAMIC PRODUCTS; GLASS AND GLASSWARE"
14,222,5757,"Finland","NATURAL OR CULTURED PEARLS, PRECIOUS OR SEMI-PRECIOUS STONES, PRECIOUS METALS, METALS CLAD WITH PRECIOUS METAL AND ARTICLES THEREOF; IMITATION JEWELLERY; COIN"
15,222,971561,"Finland","BASE METALS AND ARTICLES OF BASE METAL"
16,222,14614308,"Finland","MACHINERY AND MECHANICAL APPLIANCES; ELECTRICAL EQUIPMENT; PARTS THEREOF; SOUND RECORDERS AND REPRODUCERS, TELEVISION IMAGE AND SOUND RECORDERS AND REPRODUCERS, AND PARTS AND ACCESSORIES OF SUCH ARTICLES"
17,222,13427653,"Finland","VEHICLES, AIRCRAFT, VESSELS AND ASSOCIATED TRANSPORT EQUIPMENT"
18,222,4062385,"Finland","OPTICAL, PHOTOGRAPHIC, CINEMATOGRAPHIC, MEASURING, CHECKING, PRECISION, MEDICAL OR SURGICAL INSTRUMENTS AND APPARATUS; CLOCKS AND WATCHES; MUSICAL INSTRUMENTS; PARTS AND ACCESSORIES THEREOF"
19,222,4550,"Finland","ARMS AND AMMUNITION; PARTS AND ACCESSORIES THEREOF"
20,222,399367,"Finland","MISCELLANEOUS MANUFACTURED ARTICLES"
21,222,20539,"Finland","WORKS OF ART, COLLECTORS' PIECES AND ANTIQUES"

If you need to match the country field exactly, you could replace the above grep with something like this:

... | grep -P '^.*?,.*?,.*?,"Finland"'

Though personally I’d maybe use awk for that; see the sketch below.
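A rough awk equivalent, assuming (as in the output above) that the quoted country name sits in the fourth comma-separated field and itself contains no comma:

... | awk -F, '$4 == "\"Finland\""'

On my machine, executing the whole pipeline takes 0.675 seconds. That’s pretty fast!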

The q solution

q is a tool that allows you to perform SQL queries on CSV files from the comfort of the command line. If you know SQL, that should be pretty cool!
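For a quick taste, counting the rows of our merged file could look something like this (q treats file names in the FROM clause as tables):

$ q -d, -H 'select count(*) from data.csv'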

We’ve got some issues with digits and hyphens in the column names, so we first pre-process to get rid of those:

$ head -n 1 data.csv | tr '1-9' 'A-I' | sed 's/-//g'; tail -n +2 data.csv

This uses tr to replace the digits 1-9 with the corresponding letters A-I, and sed to strip the hyphens; we then pipe the result into q.
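For example, the column name Quantity1-Year (which appears verbatim in the sqlite3 queries further down) becomes QuantityAYear:

$ echo 'Quantity1-Year' | tr '1-9' 'A-I' | sed 's/-//g'
QuantityAYear

The q command itself looks like this: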

$ q -d, -H 'select Description, sum(ValueYear) from - JOIN hs_sections.csv ON substr(HS,2,2)=Number JOIN countries.csv ON Country=CountryID where CountryName="Finland" group by Description'

-d specifies the delimiter, -H specifies the presence of a header row (and we can use these header names in the query!) and the rest is just SQL.

$ time (head -n 1 data.csv | tr '1-9' 'A-I' | sed 's/-//g'; tail -n +2 data.csv) | q -d, -H 'select Description, sum(ValueYear) from - JOIN hs_sections.csv ON substr(HS,2,2)=Number JOIN countries.csv ON Country=CountryID where CountryName="Finland" group by Description'
ANIMAL OR VEGETABLE FATS AND OILS AND THEIR CLEAVAGE PRODUCTS; PREPARED EDIBLE FATS; ANIMAL OR VEGETABLE WAXES,654
ARMS AND AMMUNITION; PARTS AND ACCESSORIES THEREOF,4550
"ARTICLES OF STONE, PLASTER, CEMENT, ASBESTOS, MICA OR SIMILAR MATERIALS; CERAMIC PRODUCTS; GLASS AND GLASSWARE",991508
...
real    0m12.590s
user    0m12.364s
sys     0m0.236s

Wow, this was pretty pleasant, but it took my machine 12.59 seconds to get there. q uses sqlite under the hood, and we can use its -S option to save the resulting sqlite database to a file.
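Something along these lines should produce data.sqlite (a sketch; check q’s --help for the exact -S usage in your version):

$ (head -n 1 data.csv | tr '1-9' 'A-I' | sed 's/-//g'; tail -n +2 data.csv) | q -d, -H -S data.sqlite 'select Description, sum(ValueYear) from - JOIN hs_sections.csv ON substr(HS,2,2)=Number JOIN countries.csv ON Country=CountryID where CountryName="Finland" group by Description'

Here’s an sqlite3 command that runs an equivalent query against the saved database (note that it sums the QuantityAYear column rather than ValueYear, which is presumably why the numbers differ from the q output above):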

time sqlite3 data.sqlite 'select Description, sum(QuantityAYear) from `-` JOIN `hs_sections.csv` ON substr(HS,2,2)=Number JOIN `countries.csv` ON Country=CountryID where CountryName="Finland" group by Description;' 
ANIMAL OR VEGETABLE FATS AND OILS AND THEIR CLEAVAGE PRODUCTS; PREPARED EDIBLE FATS; ANIMAL OR VEGETABLE WAXES|0
ARMS AND AMMUNITION; PARTS AND ACCESSORIES THEREOF|0
ARTICLES OF STONE, PLASTER, CEMENT, ASBESTOS, MICA OR SIMILAR MATERIALS; CERAMIC PRODUCTS; GLASS AND GLASSWARE|7436
...
real    0m0.218s
user    0m0.192s
sys     0m0.024s

As you can see, that is pretty fast, so q evidently spends quite a bit of its time importing the CSV file. Well, as it turns out, sqlite3 supports importing CSV files as well. Here’s a command that creates a database in test.sqlite, imports the three required CSV files, and runs the query:

$ time (sqlite3 -csv test.sqlite '.import data.csv data'; sqlite3 -csv test.sqlite '.import hs_sections.csv hs_sections'; sqlite3 -csv test.sqlite '.import countries.csv countries'; sqlite3 -csv test.sqlite 'select Description, sum(`Quantity1-Year`) from data JOIN hs_sections ON substr(HS,2,2)=Number JOIN countries ON Country=CountryID where CountryName="Finland" group by Description; ')
"ANIMAL OR VEGETABLE FATS AND OILS AND THEIR CLEAVAGE PRODUCTS; PREPARED EDIBLE FATS; ANIMAL OR VEGETABLE WAXES",0
"ARMS AND AMMUNITION; PARTS AND ACCESSORIES THEREOF",0
"ARTICLES OF STONE, PLASTER, CEMENT, ASBESTOS, MICA OR SIMILAR MATERIALS; CERAMIC PRODUCTS; GLASS AND GLASSWARE",267696
...
real    0m2.581s
user    0m2.372s
sys     0m0.120s

This is much faster than using q, but may have various limitations. Note that if you feed sqlite3 commands through standard input, you can do all this in a single sqlite3 session, and the database can be entirely in-memory.
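A minimal sketch of that approach, reusing the import commands and the query from above against a transient in-memory database:

$ sqlite3 <<'EOF'
.mode csv
.import data.csv data
.import hs_sections.csv hs_sections
.import countries.csv countries
-- same query as in the previous example
select Description, sum(`Quantity1-Year`) from data JOIN hs_sections ON substr(HS,2,2)=Number JOIN countries ON Country=CountryID where CountryName='Finland' group by Description;
EOF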
