Android programming: connect to an HTTPS server with self-signed certificate

July 21st, 2014 Open Source, Safe For Seneca

By Andrew Smith

In a previous post I described my frustration with the fact that it’s so difficult to find documentation about how to connect to a server using HTTPS if the certificate for that server is self-signed (not from a paid-for certificate authority).

After a while I found that someone at Google noticed that, because of this lack of documentation, the common solution is to disable certificate checking altogether, which of course nullifies any possible advantage of using HTTPS. So they posted some documentation, and after struggling with it for a while I finally got it to work.

You don’t need BouncyCastle, just the stock Android APIs, and you should be able to easily customise this code for your own uses:

    // Imports needed (in addition to your activity's usual ones):
    // import android.annotation.SuppressLint;
    // import android.util.Log;
    // import java.io.BufferedInputStream;
    // import java.io.InputStream;
    // import java.net.URL;
    // import java.security.KeyStore;
    // import java.security.cert.Certificate;
    // import java.security.cert.CertificateFactory;
    // import java.security.cert.X509Certificate;
    // import javax.net.ssl.HttpsURLConnection;
    // import javax.net.ssl.SSLContext;
    // import javax.net.ssl.TrustManagerFactory;

    /**
     * Set up a connection to littlesvr.ca using HTTPS. An entire function
     * is needed to do this because littlesvr.ca has a self-signed certificate.
     * 
     * The caller of the function would do something like:
     * HttpsURLConnection urlConnection = setUpHttpsConnection("https://littlesvr.ca");
     * InputStream in = urlConnection.getInputStream();
     * And read from that "in" as usual in Java
     * 
     * Based on code from:
     * https://developer.android.com/training/articles/security-ssl.html#SelfSigned
     */
    @SuppressLint("SdCardPath")
    public static HttpsURLConnection setUpHttpsConnection(String urlString)
    {
        try
        {
            // Load CAs from an InputStream
            // (could be from a resource or ByteArrayInputStream or ...)
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            
            // My CRT file that I put in the assets folder
            // I got this file by following these steps:
            // * Go to https://littlesvr.ca using Firefox
            // * Click the padlock/More/Security/View Certificate/Details/Export
            // * Save the file as littlesvr.crt (type X.509 Certificate (PEM))
            // The MainActivity.context is declared as:
            // public static Context context;
            // And initialized in MainActivity.onCreate() as:
            // MainActivity.context = getApplicationContext();
            InputStream caInput = new BufferedInputStream(MainActivity.context.getAssets().open("littlesvr.crt"));
            Certificate ca;
            try
            {
                ca = cf.generateCertificate(caInput);
                System.out.println("ca=" + ((X509Certificate) ca).getSubjectDN());
            }
            finally
            {
                caInput.close(); // don't leak the asset stream
            }
            
            // Create a KeyStore containing our trusted CAs
            String keyStoreType = KeyStore.getDefaultType();
            KeyStore keyStore = KeyStore.getInstance(keyStoreType);
            keyStore.load(null, null);
            keyStore.setCertificateEntry("ca", ca);
            
            // Create a TrustManager that trusts the CAs in our KeyStore
            String tmfAlgorithm = TrustManagerFactory.getDefaultAlgorithm();
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(tmfAlgorithm);
            tmf.init(keyStore);
            
            // Create an SSLContext that uses our TrustManager
            SSLContext context = SSLContext.getInstance("TLS");
            context.init(null, tmf.getTrustManagers(), null);
            
            // Tell the URLConnection to use a SocketFactory from our SSLContext
            URL url = new URL(urlString);
            HttpsURLConnection urlConnection = (HttpsURLConnection)url.openConnection();
            urlConnection.setSSLSocketFactory(context.getSocketFactory());
            
            return urlConnection;
        }
        catch (Exception ex)
        {
            Log.e(TAG, "Failed to establish SSL connection to server: " + ex.toString());
            return null;
        }
    }
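
Incidentally, you don’t have to export the certificate through Firefox’s dialogs – it can also be grabbed from the command line. A sketch (assuming openssl is installed; fetch_cert is just a name I made up):

```shell
# Fetch a server's certificate in PEM format: s_client prints the whole
# handshake, and the second openssl call extracts just the certificate.
fetch_cert() {
    openssl s_client -connect "$1:443" -servername "$1" </dev/null 2>/dev/null \
        | openssl x509 -outform PEM
}

# Usage (writes the same kind of file as the Firefox export):
# fetch_cert littlesvr.ca > littlesvr.crt
```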

Good luck! And you’re welcome.

Why hate self-signed public key certificates?

July 19th, 2014 Open Source, Safe For Seneca

By Andrew Smith

There are times when most of the world goes into a frenzied argument for something without thinking it through. This happens with many kinds of issues, from (recent news) geopolitical to (since the beginning of time) religious to (what this post is about) technical.

Effective means of reason and deduction are forgotten, research is ignored as unnecessary, the only thing that matters is that everybody (except for a couple of loonies) says so – then it must be true.

This doesn’t happen all the time. Often even large, unrelated groups of people can accomplish amazing feats. But sometimes I’m flabbergasted by the stupidity of the masses.

Public key encryption. Everybody uses it, few people understand it. I do. Many, many years ago I read a book called Crypto – an excellent introduction to the science, technology, and politics of cryptography. In particular I learned something there that is very important to this post: how public key cryptography works. I won’t bore you with the details; fundamentally it’s this:

  • There is a key pair – public key and private key.
  • The holder of the public key can encrypt a message. And since the key is public – that’s anyone – anyone can encrypt a message.
  • Only the holder of the paired private key (private – that’s one person) can decrypt a message encrypted with that public key.
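
The whole round trip can be sketched with openssl (a toy example only – RSA is used directly on a short message here, while real protocols use it to exchange a symmetric session key):

```shell
# Generate the key pair: private.pem stays secret, public.pem goes to the world
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem 2>/dev/null
openssl pkey -in private.pem -pubout -out public.pem

# Anyone holding public.pem can encrypt...
printf 'meet me at noon' | openssl pkeyutl -encrypt -pubin -inkey public.pem -out msg.enc

# ...but only the holder of private.pem can decrypt
openssl pkeyutl -decrypt -inkey private.pem -in msg.enc
```

The last command prints the original message back; anyone who intercepts msg.enc without private.pem gets nothing useful out of it.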

It’s an amazing system made possible by mathematical one-way functions, and it works. Because of it, more than anything else, the current internet was made possible. We would not have had online shopping, banking, services, or anything at all that requires privacy and security on the internet without public key cryptography.

Public key cryptography has one really unfortunate complication – key exchange. You can be sure the holder of the private key is the only one who can read your message that you encrypted with their public key, but how can you be sure you have that person’s public key? What if an impostor slipped you his public key instead?

There are various solutions to this problem, none of them ideal. The most popular one is to have a central authority that certifies the certificates. So if you want to be sure https://www.fsf.org is really the Free Software Foundation and not an impostor – you’ll have to trust their current certificate authority Gandi Standard SSL CA. Who the hell is that? Why should you trust them? Yeah, that’s the problem. The trust is enforced partially by financial incentives and partially by crowd-sourced trust: Gandi would lose their business if they were caught issuing fake certificates, that’s all. But it’s the best we’ve got today.

There is one case when a third-party certificate authority is unnecessary, in fact undesired: when I control both ends of the communication. When would that happen? Well it just so happens that I’m currently working on an Android app (my code) which connects to a web service (my code on my server).

I would like to have secure communication between the two. Meaning I need to be sure that either the messages between the two arrive unmodified and unread by third parties or they do not arrive at all. A perfect use case for public key encryption, and I can of course put my own public key in my own Android app to match my own private key on my own server.. right?

No. Or at least not without a great amount of difficulty. Try to find a solution for using a self-signed certificate with the Android (i.e. Apache) DefaultHttpClient or HttpClient. You’ll find a lot of people who will say (with foam at their mouths) NEVER DO THIS THIS IS INSECURE HORRIBLE TERRIBLE STUPID WHY WOULD YOU EVEN ASK!!!

And it would be OK if they explained why they think this is the case, but they don’t. Of course not – why bother figuring it out? Everyone else is saying (no, shouting) the same. Must be true.

This is when I start to lose faith in humanity. From “weapons of mass destruction” in Iraq (willingly swallowed by hundreds of millions in the west) to “god has a problem with condoms” (in certain popular religions) to bullshit like this technical problem I’m trying to solve, it’s amazing we haven’t blown ourselves to bits a long time ago.

And it’s not like things like this are rare or are a relic of the unenlightened past, this is happening now and on a global scale. I am dumbfounded, and only mildly optimistic because there are still very many people on this planet who are clearly doing well enough. I just don’t understand how we made it this far.

When (or, I should say “if”) I figure out how to make a connection with HttpClient and a self-signed certificate I’ll post a followup.

Who’s screwing up GTK?

May 10th, 2014 Open Source, Safe For Seneca

By Andrew Smith

For many years I’ve been a fan of GTK. I started using linux when GTK1 was dominant, and as I became a developer GTK2 took over, with beautiful themes and very usable widgets. I used GTK software, feeling that the philosophy of the people who write GTK apps matches my own: less fluff and more stability.

Then Gnome went off the rails. Ridiculous decisions like the one-window-per-folder file manager were made with utter disregard for real users’ opinions. It wasn’t very good for developers either; it got so bad that Gnome was entirely removed from Slackware – something I thought might be temporary but turned out to be a permanent decision.

Then GTK3 and Gnome3 came – both with apparently clear intentions but again inexplicable disregard for anyone not sharing their “vision”:

  • Bugs were introduced (probably not on purpose) into GTK2 after GTK3 was released, and those bugs will never be fixed. For example I periodically get bug reports for one of my applications which I’ve traced down to GtkFileChooserButton, and it’s a known issue no one will fix in GTK2.
  • Huge parts of GTK2 have been deprecated, for example:
    • The horizontal/vertical Box layout scheme, which is how you were supposed to do all layouts in GTK2, and despite the deprecation warnings from the compiler there has been no alternative layout mechanism identified in the documentation.
    • The entire thread API, which is at the centre of any multi-threaded application. I don’t know if this was replaced with something else or dropped completely.
  • The new library is clearly unfinished. For example the GtkAboutDialog is simply broken in the current version of GTK3.
  • Serious bugs in GTK3 are ignored. For example I spent a day researching why they broke the scrollbars in GTK3, found that it was probably done accidentally (the new functionality doesn’t fit even their own designs), filed a bug, and five months later – still not so much as an acknowledgement that this is a problem.

To be honest I think the Gnome people were always a little too fond of making major experimental changes, but I always felt that GTK itself was a bastion of stability, like the Linux kernel, GCC, Apache, etc. With GTK3 that changed. Who’s running GTK now? I’ve no idea. I don’t know who was running it before either. I don’t know if it’s a leadership problem, a management problem, a financial problem, or even just a lack of technical knowhow (maybe some tech guru[s] left the project).

I spent some time porting one of my programs (Asunder) from GTK2 to GTK3, and the problems I ran into disgusted me so much that I rolled back the “upgrade”. I wasn’t the only one either.

If you have time (45 minutes) I recommend you watch this presentation by Dirk Hohndel at linux.conf.au, who with Linus Torvalds as a partner tried really hard to use GTK for their scuba diving application and eventually gave up in frustration. For me the highlight of the presentation was the comment on the GTK community: essentially no one in the project cares about anything other than their own goals, and their goals are not (as you might expect) to create an awesome toolkit, but rather to enable them to create Gnome3. That’s the only explanation I’ve heard or read that makes sense.

They’re not the only ones either. I’ve run a little unofficial survey of existing software to check how many people moved to GTK3, that was done relatively easily using these commands:

cd /usr/bin && for F in *; do ldd $F 2> /dev/null | grep gtk-x11-2.0 > /dev/null; if [ $? -eq 0 ]; then echo "$F"; fi; done

for F in *; do ldd $F 2> /dev/null | grep gtk-3 > /dev/null; if [ $? -eq 0 ]; then echo "$F"; fi; done

The result? 83 binaries using GTK2, and 68 using GTK3. You can’t read too much into those numbers – at least half of them are parts of XFCE (GTK2) or Gnome/Cinnamon (GTK3) but it’s telling to look at the list rather than the numbers. Essentially no one has moved to GTK3 except the Gnome projects and a couple of others. Hm, I wonder if they wonder why, or care…
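
If you want to repeat the survey on your own machine, the same idea can be wrapped into a reusable function – a sketch (count_linked is just a name I made up):

```shell
# Count how many binaries in a directory link against a given library.
# ldd fails harmlessly on scripts and other non-binaries, so those count as 0.
count_linked() {
    dir="$1"; lib="$2"; n=0
    for f in "$dir"/*; do
        ldd "$f" 2>/dev/null | grep -q "$lib" && n=$((n+1))
    done
    echo "$n"
}

# Usage:
# count_linked /usr/bin gtk-x11-2.0   # GTK2
# count_linked /usr/bin gtk-3         # GTK3
```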

Dirk and Linus went on and migrated their application to Qt, and they had a lot of praise for that community. I trust them on the community part, so I decided to consider Qt as a toolkit for my next project. I have, and I wasn’t entirely happy with what I found:

  • Writing C++ in and of itself isn’t a major issue for me, but I dislike overdesigned frameworks and that’s what Qt is.
  • Qt doesn’t use native widgets, and that explains why Qt apps never look native on any platform.
  • In Qt5 (the latest) you have to use JavaScript and QML for the UI, which is a bit too big a jump for me.
  • But it’s supposed to work much better on other platforms (like Windows and OSX), which I believe.

So with GTK3 in limbo for years and the future of Qt unclear – I don’t know what to do for my next app. The only other realistic option I found was wxWidgets, but that’s built on top of GTK on Linux, and I fear it will simply inherit all the GTK3 problems. I’ve started creating a project in wxWidgets, but I am wary of investing a lot more time into it until I know how this relationship will work out.

The point of this blog post though was to bash the people currently running GTK, because they deserve it. Shame on you for breaking something that worked.

Slackware penguin sticker

May 9th, 2014 Open Source, Safe For Seneca

By Andrew Smith

I decided to decorate my office a little, and since I’ve always been a Slackware user I wanted to get a large Slackware sticker to put on my glass. I couldn’t find one, so I made it myself, here’s the result:

Office slackware penguin sticker

I had to construct the SVG file myself. I started from the Linux penguin SVG (available somewhere online) and a pipe from open clipart. Thanks to the guys at LinuxQuestions for helping me get started.

To combine the two I had to learn a little bit of Inkscape (the vector graphics editor for Linux), which was neat.

When I was done I took the file to the printer (who, unbelievably, started with “SVG? What is that?”), and he finally printed it for me, but with the wrong font. I should have expected that (I teach my students about how the same fonts are not available on different platforms) but I had to ask him to fix it. To make sure it doesn’t happen again I converted the text in the SVG file to a path, by selecting the text in Inkscape and using the Path/ObjectToPath feature.

Unfortunately I somehow managed to lose the original file, so the one below has the path, not the text; if you want to change the text you’ll have to start by deleting what’s there:

Slackware Linux SVG

Because this was printed on a white background (the printer either couldn’t or didn’t want to print on a transparent background) I had to chop off the smoke and the shadow underneath; they didn’t look good over glass.

Also the line between the “slack” and the “ware” turned out much skinnier than what was in the SVG. I’m not sure if that was a bug in the Inkscape I used, or in the Corel Draw the printer used, or something else entirely.

It cost me 60$ or something to print a meter-tall sticker – a pretty good deal, I thought.

How long before you can’t access your files on an Android phone?

May 1st, 2014 Open Source, Safe For Seneca

By Andrew Smith

A couple of months ago I got a Nexus 5 to play with. I was generally impressed with the device, but a couple of things gave me pause, one of them I’ll talk about now: you can no longer set up your phone to act as a USB mass storage device when connected to a computer.

That really bothered me at the time but then I discovered that the “Media device (MTP)” mode allows you access to all the files without the need for special software or dealing with DRM.

And today I remembered my worries. What happened is I installed and ran TitaniumBackup, which normally creates a directory in the SD card named “TitaniumBackup”, which I copy (as a backup) to my computer over USB. But not this time – this time when I connected the phone to the computer all I could find in the root was storage/emulated/legacy/TitaniumBackup – a file and not a directory.

After an hour of trying to figure it out – I did (I just had to reboot the phone). Google does indeed allow you unrestricted access to everything in the SD card over MTP, except there is a bug in their code that won’t let you see some files sometimes.

Reported in October 2012 – that’s a year and a half ago. A large number of developers are complaining in that bug, Google didn’t so much as acknowledge it, and it’s been manifest for more than one major release.

My theory is the same one I came up with when I first plugged my Nexus 5 into the computer and didn’t see a block storage device: Google will at some point soon no longer allow you access to the files on your phone or tablet. They’re already making it complicated by hiding your sdcard by default, and now this bug will train the feature’s remaining users to not rely on it. Everyone wants everything on the cloud, that’s what everyone’s going to get, and nothing else unless you’re a techie.

Probably as a coincidence (do you believe in those?), as I was struggling with this problem a message came up in my status bar offering to set up Google Drive. If it had happened a bit later, after I had time to digest what happened, I would have said “yeah, I saw that one coming” :)

Snowshoe to work

May 1st, 2014 Safe For Seneca

By Andrew Smith

After going to one of our campuses (Newnham) on cross-country skis I decided to give snowshoeing a try for my everyday campus (Seneca@York). I enjoyed it so much I did it all winter long.

Video was edited with Cinelerra, though not much editing was needed. I’ll make some notes about the process here because I already forgot what I did the last time a mere two weeks ago:

My camera (a Sony HDR-PJ200) creates MTS files (which apparently are AVCHD – a piece of information that’s useful to know when looking through endless lists of format capabilities). These are 1080i, which is not quite as good as 1080p and this is important to know not only because of the resolution and aspect ratio but also because “i” is for interlaced, and not all programs deinterlace automatically (e.g. VLC).

I cannot use MTS files in Cinelerra. In fact very little software supports them in any fashion. So the first step was to find a format that Cinelerra could use as a source and that wouldn’t lose too much of my high quality. Originally I used DV, but I was later annoyed to discover that DV is 4:3, which squeezes my image way too much. I tried a high quality MP4 with h264 compression but Cinelerra can’t handle playing that back. Finally I settled on MPEG2 with AC3 audio in an MPG container.

I converted my MTS files to MPG using a very handy graphical tool called WinFF. That is little more than a GUI that will create an ffmpeg command, but if you’ve ever tried to write an ffmpeg command you’ll know how valuable it is. The preset I used in WinFF is called “DVD/NTSC DVD HQ Widescreen”. The resulting files were about 25-30% the size of the originals but the quality was quite good enough.
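
For the record, something close to what that preset runs under the hood can be done with ffmpeg directly – a sketch (the exact options WinFF generates may differ, but -target ntsc-dvd picks the same MPEG2 video / AC3 audio combination in an MPG container):

```shell
# Convert an AVCHD .MTS file to DVD-compatible MPEG-2/AC-3 in an MPG container.
# -target ntsc-dvd sets the codecs, 720x480 frame size, and NTSC frame rate;
# -aspect 16:9 keeps the widescreen shape instead of squeezing it to 4:3.
mts_to_mpg() {
    ffmpeg -i "$1" -target ntsc-dvd -aspect 16:9 -y "$2"
}

# Usage:
# mts_to_mpg 00001.MTS 00001.mpg
```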

In Cinelerra I imported the MPG files and edited the video. Then I remembered to set my project settings for 16:9 video (I chose 720×480); thankfully that part didn’t require me to re-edit anything.

Finally I rendered the video into:

  • One OGV file straight from Cinelerra
  • One Quicktime for Linux file with two’s complement audio compression and DV video compression, to be used as an intermediary to create:
    • One MP4 file (h264 with AAC)

The final ogv/mp4 files are still rather large (~10MB/min) but I figure that’s ok since only about 3 people a month read my blog :)

FSOSS 2013 Robots Competition

April 16th, 2014 Open Source, Safe For Seneca

By Andrew Smith

It took me half a year to finish this video. It’s my first exercise in video editing. Enjoy!

It took me this long because:

  • The only program I had any idea how to use was Windows Movie Maker. But I didn’t want to use it for several reasons that should be obvious to you.
  • My wife edits video all the time but she uses a mac. iMovie works for her but I have no desire to become a mac user.
  • I didn’t really want to learn Adobe Premiere or the like because that kind of software costs too much money.
  • Linux is.. as interesting when it comes to video editing as it is in many other ways.

I’ve decided to invest the time into learning Cinelerra. It’s the most serious video editing tool on Linux, has been around for a while, and has a paid, supported version, which means it’s more likely to survive for a while still.

It was a large investment in time – editing video is very different from any other kind of editing – but hopefully it will pay off long-term.

Ridiculous PayPass/PayWave

March 28th, 2014 Safe For Seneca

By Andrew Smith

Both of my credit cards have been replaced (without any request from me) with new versions which have a wireless, authentication-less, confirmation-less, and protection-less system called either Mastercard PayPass or Visa PayWave.

I’ve never understood the old american system where your card number alone can be used to take money from you. And yes – it is your money and not the bank’s that’s at risk, since the burden of noticing the fraud and proving you weren’t at fault is ultimately yours.

Finally in Canada we got a better system (catching up with the europeans) where (shock!) entering a PIN is required to allow someone to take money from your account.

And then we went back an era in security, to a system where your card doesn’t even need to be visible: information is wirelessly read from it and used.. however the reader wants to use it, with some limits like 100$ per transaction. I will dare presume this was done because a typical moron is too lazy to insert a card, type in a PIN, and wait for verification.

Not only that, but it turns out that your name, credit card number, and expiry date can apparently be read from your card using a 10$ device. Shockingly stupid.

More shocking? Read through this or this thread. It’s incredible how many people will claim (clearly without thinking it through) that this system is more secure! Trying to understand how they arrive at that conclusion, and doing some research, I figured it out:

  1. They don’t understand that chip&pin and PayPass/PayWave are unrelated technologies, and they assume that you must have both or else go back to the magnetic stripe. Clearly false, and I know that for a fact because for at least 2 years I had credit cards from both companies that had chip&pin but no radio functionality at all.
  2. They take the bank’s word for “you will not be held responsible for fraudulent transactions”. Really? Have you read a credit card statement recently? How many of the transactions on there can you tell with certainty where they came from? I recall once my card number was used fraudulently (without the PIN of course, why would you require a PIN) at York University. I happened to work at Seneca, at the campus shared with York University. It took me a long time to figure out that I really didn’t pay 75$ at the admissions office there, partially because the bank insisted it could have been for something not admissions-related such as parking.
  3. They also parrot the MasterCard and Visa statements that “this technology is extremely secure and the information such as your name and credit card number is useless to thieves”. Aha? Another time when my credit card was misused (again, without a PIN, cause who needs that) someone bought over 1000$ worth of furniture and Caribbean trips from Sears. The bank noticed and I wasn’t held responsible, but my card had to be destroyed, I spent about an hour on the phone with them, and it took a lot of arithmetic over a couple of statements to confirm that I didn’t get charged for this misuse. Stress on top of stress.

Credit cards generally are a retarded idea. They allow you to spend money you don’t have. Extremely convenient – pay online and anywhere else, interest-free for a month, with no transaction fees – but do you know why it’s so convenient? It’s because of the incredible number of poor schmucks who end up buying too much stuff with money that’s not their own and end up paying nearly-illegally-large interest fees on it.

In principle I don’t necessarily mind that some dumbass is paying for my convenience, but I do mind when the card makers force an incredibly insecure payment system down my throat.

What can I do about it? Cancel all my cards? You know perfectly well that would mean I would not be able to rent a car or a pair of skis or do a number of other things that really have nothing to do with credit. I have to accept that some otherwise-perfectly-reasonable companies were sold the idea that a credit card should be a requirement even when no credit is needed.

So what I’ll probably do is: try to find a good RFID-blocking wallet and use the credit card even less than I’m using it now (i.e. almost never). It will be hard because I’m quite picky about my wallets – I’ve had what looks like the same leather wallet for the last 20 years – but there are a number of options available, and the credit cards aren’t the only RFID concern, so I’ll deal with it.

I guess that won’t be teaching the companies a lesson, that’s exactly what they want (fewer savvy users and more sloppy spenders), but so be it.

[Gas station] Almost ran out of gas

March 28th, 2014 Safe For Seneca

By Andrew Smith

Parry Sound huge truck gas station

Yep, this guy was actually getting gas at a Petro Canada at the Highway 400 rest stop near Parry Sound. See the little man on top? He pulled the hose all the way up there to fill it up. Not sure which of the two – truck or station – ran out of gas :)

Setting up Sendmail on a dynamic IP, part3: DKIM

February 4th, 2014 Open Source

By Andrew Smith

Compared to part1 and part2, DKIM took a lot of effort to understand. The concept is relatively simple but the documentation is shit. It doesn’t help that there are at least three implementations of DKIM, though eventually (after some days) I figured out OpenDKIM is the only one that still matters.

In typical open source fashion the guides on the website are.. the ASCII readme files that come with the distribution.. which I don’t have a problem with in principle, but in practice it never works well. Plain text is hard to read in a browser. Anyway, that wasn’t the real problem. The real problem is that the documentation is in a bizarre order, is definitely not for newbies, and has very significant parts missing. So, as I usually do, I wrote my own notes and I’ll publish them here.

Step1: Install OpenDKIM

OpenDKIM doesn’t come with Slackware, and strangely doesn’t even have a slackbuild on slackbuilds.org. So I had to get the source and compile it myself. Not much of a problem since I’m a slackware user, but yes a problem because I’m a linux user.

Turns out the latest version of OpenDKIM (2.9.0) doesn’t compile on linux of any kind, because the code uses some BSD-only functions and the configure script doesn’t know to replace them with non-BSD versions. I found a bug report where a guy claimed (in a backwards kind of way) that the bug will be fixed in the next version, but I didn’t have the time to wait for that so I got the previous version, 2.8.4, which worked.

I normally don’t mind when new software installs into /usr/local, but I hate when it starts to use ridiculous directories like /usr/local/var. After trying to accept that for a day I went back and started over – making sure it’s installed into /usr instead, here are the commands I used:

./configure --prefix=/usr
make
make install
ldconfig

The rest of the time was spent figuring out why the instructions in the official readme are so retarded.

Step2: Generate keys

For reference: in the rest of this guide I will use my setup as an example. In this setup my SELECTOR is littlesvr-dkim (it can be any random string, though might as well call it your domain name).

First thing you need to do is generate a public/private key pair. This is actually quite easy because opendkim comes with a tool that will do it. The command sequence I used was:

mkdir /etc/dkim
cd /etc/dkim
chmod 700 .
opendkim-genkey -s littlesvr-dkim
ls -al

Make sure that the dkim directory and the files in it (particularly the keys) are only readable by root. If you’re wondering, as I did – opendkim-genkey will generate a 1024-bit key by default, which should be good enough for a number of years to come.

The command will generate two files, in my case named littlesvr-dkim.private and littlesvr-dkim.txt (the latter is the public key in a DNS record format).

Step3: Configure opendkim

This almost doesn’t deserve its own step. First copy the sample config from /usr/share/doc/opendkim/opendkim.conf.simple to /etc/dkim/opendkim.conf, and then the only change you have to make inside is the KeyFile path – it was /etc/dkim/littlesvr-dkim.private for me.

Step4: Create a startup script

Usually a startup script is something you make when you’re done, but in this case you really want it to begin with, because the command to launch opendkim is complicated. So take your time now to create the script /etc/rc.d/rc.dkim, using this below as a starting point (replace any references to my server littlesvr.ca with your own server, private key path, and selector):

#!/bin/sh
# Start/stop/restart the domain keys identified mail milter daemon.

# Comma-separated (no spaces) list of the domains you want
# opendkim to work with:
MYDOMAINS=littlesvr.ca
# The name with full path of the private key you generated with opendkim-genkey
PRIVATE_KEY=/etc/dkim/littlesvr-dkim.private
# Your SELECTOR:
SELECTOR=littlesvr-dkim

dkim_start() {
   if [ -x /usr/sbin/opendkim ]; then
     echo "Starting domain keys identified mail milter daemon /usr/sbin/opendkim"
     /usr/bin/logger "Starting domain keys identified mail milter daemon /usr/sbin/opendkim"
     /usr/sbin/opendkim -l -p local:/var/dkim/dkim.sock \
       -d $MYDOMAINS -k $PRIVATE_KEY -s $SELECTOR
   fi
}

# Stop dkim:
dkim_stop() {
   /usr/bin/logger "Stopping domain keys identified mail milter daemon"
   killall opendkim
   if [ -S /var/dkim/dkim.sock ]; then
     rm /var/dkim/dkim.sock
   fi
}

# Restart dkim:
dkim_restart() {
   dkim_stop
   sleep 1
   dkim_start
}

case "$1" in
'start')
   dkim_start
   ;;
'stop')
   dkim_stop
   ;;
'restart')
   dkim_restart
   ;;
*)
   echo "usage $0 start|stop|restart"
esac

You want to make sure that the script has execute permissions and that you run it automatically from rc.local. I expect in most setups it doesn’t matter when it starts relative to Sendmail.

Step5: Set up DNS records

If everything worked so far – you can go ahead and publish the appropriate DNS records. It’s safe to do this now because those records won’t be used until your sendmail is reconfigured (next step).

This is probably what the official setup guide is the worst at. Basically it tells you to go and read the RFC – yeah, thanks.

First you have to create a TXT record for _domainkey.whateveryourserver.ca – which in my case is _domainkey.littlesvr.ca. The value of the record should be “o=-”, which means all the outgoing mail will be signed (as opposed to some of it).

The second record you have to create will have your actual key. This is another TXT record, this time for SELECTOR._domainkey.whateveryourserver.ca (in my case for littlesvr-dkim._domainkey.littlesvr.ca) with the value from the .txt file generated by opendkim-genkey. It looked to me like I had to remove some whitespace and cut out everything from that file between the () brackets, but maybe I didn’t have to. What I ended up using as a value is this (wrapped for the blog):

v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC36RsUh9bccxZAy
2NuOr4nfD3+nxXJQVsGCt3iF/pEwZZjkkqzfiGUkfTDMICviDgfcqES
aQg8GPwFd/IKTSErQy09g2XPtvbro3CtAHnarkjTji4RUqiptcdk3H
K83rKdx5hXH0mVITojobn+dsT1+pqToBt4TTQ0CfY2SyiDVQIDAQAB
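
If you want to avoid the manual trimming, it can be scripted – a sketch assuming the standard opendkim-genkey .txt output format (dkim_txt_value is just a name I made up):

```shell
# Collapse the quoted chunks in the selector .txt file produced by
# opendkim-genkey into a single-line TXT record value: join the lines,
# keep only what's between the () brackets, drop the quotes, and
# squeeze the whitespace.
dkim_txt_value() {
    tr -d '\n' < "$1" \
      | sed -e 's/.*(//' -e 's/).*//' -e 's/"//g' \
            -e 's/[[:space:]]\{1,\}/ /g' -e 's/^ //' -e 's/ $//'
}

# Usage:
# dkim_txt_value /etc/dkim/littlesvr-dkim.txt
```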

These records obviously need to be created in your DNS server. If you have your own – you know what to do. In my case – I went to my dynamic DNS provider’s website and created them there. They are static, and don’t need to be updated as my server’s IP changes.

You can test your changes with dig. I ran these two:

dig _domainkey.littlesvr.ca TXT
dig littlesvr-dkim._domainkey.littlesvr.ca TXT

I couldn’t figure out whether opendkim-testkey didn’t work or it was silently telling me that my setup was correct.

Step6: Update sendmail configuration

Very little needs to be done here; most of the work is the typical pain of updating sendmail config files. If you’ve set up your own sendmail, you probably remember which .mc file had your (pre-compiled) configuration. Mine was /usr/share/sendmail/cf/cf/sendmail-slackware-tls-sasl-andrew.mc

I edited it and added these lines right after LOCAL_DOMAIN (though it probably doesn’t matter where):

dnl# DKIM Support (added 4 feb 2014)
INPUT_MAIL_FILTER(`dkim',`S=local:/var/dkim/dkim.sock')dnl

Yours will be the same assuming you configured opendkim with the same prefix as I recommended.

To update the actual configuration I ran ./Build sendmail-slackware-tls-sasl-andrew.mc (in that dir) and copied the built sendmail-slackware-tls-sasl-andrew.cf over /etc/mail/sendmail.cf.

Before restarting sendmail – double check your DNS settings, and that opendkim is running.

Step7: Test

Once your DNS records are set and your Sendmail is passing emails through OpenDKIM – you’re ready to test, and you should do it as soon as you can, because if something didn’t work right you may end up with sent mails that never arrive at their destination, and you won’t get a bounce message to know it happened.

Luckily testing is quick and easy – just send an email to check-auth@verifier.port25.com (the same service I used in part2).

If you’ve done the same stuff I have in these three blog posts – you should be in awesome shape spam-wise! I should have done this years ago; if only I had a guide like this, I would have :)