Thursday, September 12, 2013

Sniffing the USB for answers

Previously I mentioned that part of what helped track down the problems with the Arduino Uno attach was sniffing the USB to see what the bus transactions were doing.  I thought I would write a quick piece on how I did this.

The first bit was buying a logic analyser.  I looked at getting a dedicated USB analyser but they tend to be a bit on the pricey side and, besides, they are very specialised - I don't expect I will be doing too much USB sniffing, so buying a special device just to sort out a particular bug was hard to justify.  Since USB uses standard logic level signals I could get away with capturing the logic levels and using some software to decode what that translated to in terms of USB data.  The added bonus is that I would have a tool for decoding other protocols such as SPI, one-wire, I2C, serial and so forth.  There are lots of options for buying a logic analyser and the price varies widely depending on the capabilities of the device.  For me, I needed something that was fast enough to sample full speed USB (which signals at 12Mb/s) and had at least two channels, preferably more.

There are a lot of logic analysers based on the Cypress FX2 chips; these are just dumb samplers that feed the data back to a host PC over USB - all the triggering and data storage is done on the host.  There were cheaper options but I settled on a Saleae Logic because it was housed in a decent case and had a nice set of probes and a carry case, along with software that seemed decent - you can download and run the software without the analyser hardware to try it out, which is good.  The Saleae gave me 8 inputs and just enough bandwidth to sample full speed USB.

The next step was the software.  The Saleae came with software which is functional and has some decoding facilities but, unfortunately, those facilities don't extend to decoding USB transactions.  For that I used the open source sigrok software.  Sigrok has firmware for the Saleae Logic device; the firmware is loaded when the device is opened (i.e. it is not stored in flash on the device, it is downloaded into RAM every time) so there is no harm in using their firmware.  I ended up not using this capability though: because my NetBSD machine was the device under test, I used another laptop running Windows and the Saleae software to perform the data captures.  The captures were exported as VCD (value change dump) files which I copied to my NetBSD machine and ran sigrok-cli on.
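For the curious, VCD is a simple text format.  This hand-made miniature (the signal names, identifiers and timescale here are my own choices, not an actual Saleae export) shows the general shape of what sigrok-cli has to parse:

```shell
# Create a tiny, hand-made VCD file to illustrate the format:
cat > /tmp/capture.vcd <<'EOF'
$timescale 1 ns $end
$var wire 1 ! dm $end
$var wire 1 " dp $end
$enddefinitions $end
#0
1!
1"
#42
0"
EOF
# Lines starting with '#' are timestamps; the lines following each one
# record value changes for the single-character signal identifiers.
grep -c '^#' /tmp/capture.vcd
# prints: 2
```

A real capture is the same idea, just with millions of timestamped value changes, which is why an input option to speed up reading the file matters.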

To perform the captures I needed to break out the USB signals so I could attach the logic analyser.  To do this I bought a short USB extension cable, cut it in half and soldered the wires to a small piece of veroboard.  I soldered a bit of single line header to the veroboard so each wire in the cable was brought out to a header pin that I could connect the logic analyser probes to.  I wasn't going to bring out the +5V rail but when I was making up the board I forgot and did all four wires... oh well.  The probe wires from the Saleae plug neatly onto the header, which makes the arrangement safe as well as physically well connected.  Here is a snap of what the breakout board looks like with the probes attached:

Fortunately for me, the cable manufacturer followed the USB hardware specification and used the specified colours for the wires, so it was easy to pick the correct wires for probing.  The spec says white is the D- signal, green is the D+ signal and black is ground.  So I connected probe 0 (black) to the white wire and probe 1 (brown) to the green wire.  As a side note - astute readers may have noticed that the probe wire colours correspond to the resistor colour code, which makes it easy to know what probe number you are dealing with - the ground lead is sleeved and labelled to distinguish it from the probes.  Once the probes were connected, I plugged the extension cable into my NetBSD machine, hooked up the Saleae to the Windows machine and started the sampling software.  I configured a trigger to start sampling on the falling edge of D+, as this transition will happen when a device is plugged in, and set the sample rate to 24MHz.  I then connected the Arduino to the other end of the extension cable.  The analyser triggered and started saving samples.  I let the attach process for the Arduino complete, then stopped the capture and exported the sample data as a VCD file.

Analysing the data was a challenge.  The sigrok software is capable of doing it but, to be blunt, their documentation is rather poor.  I ended up on their IRC channel to ask for help.  The guys there were friendly and guided me through what I needed to do.  Once you understand what they are doing it makes sense: the sigrok software uses layers of decoders to analyse the data stream.  For USB they start with a layer called usb_signalling that takes the raw data samples and converts them into USB data.  The next layer is called usb_protocol, which takes the output from usb_signalling and interprets the data bytes as USB protocol information.  All the decoders are written in Python, so it is quite easy to modify existing decoders or write your own.  The command line I used for the decoding was:

sigrok-cli --rate 1000m -i netbsd.vcd -I vcd:compress -P usb_signalling:dm=0:dp=1,usb_protocol

Note that the --rate option is one of my own making; there is currently a bug in sigrok-cli in that it fails to determine the sample rate from a VCD file and will crash.  I added --rate to allow me to set the sample rate - this is a hack only, the sigrok developers are looking at a more elegant solution.  Also, the sample rate is 1GHz because that is the sample rate set in the VCD.  The next argument is the input file (netbsd.vcd) and then the input format (vcd) with the compress flag set to speed up the file reading.  Then the -P option specifies the protocol decoders: first usb_signalling, which has two arguments that define which probe numbers were connected to which signals - dm is D- and, as above, this was probe 0; similarly dp is D+ and was probe 1.  The next decoder layer is usb_protocol, which performs the protocol decode.  On the machines I have, the decode was very slow; it took almost half an hour of churning before sigrok-cli started spitting out any decoded data, and many hours to decode about 30 seconds of sampled data.  The end result was output that looks like this:

usb_protocol: "SOF 1781" 
usb_protocol: "SETUP DEV 0 EP 0" 
usb_protocol: "DATA0 00 05 02 00 00 00 00 00" 
usb_protocol: "ACK " 
usb_protocol: "IN DEV 0 EP 0" 
usb_protocol: "NAK " 
usb_protocol: "SOF 1782" 
usb_protocol: "IN DEV 0 EP 0" 
usb_protocol: "DATA1 ()" 
usb_protocol: "ACK " 
usb_protocol: "SOF 1783" 
usb_protocol: "SOF 1784" 
usb_protocol: "SOF 1785" 
usb_protocol: "SOF 1786" 

This actually shows the SetAddress USB request that NetBSD was sending to the device; not only that, but it shows the problem we were having - the "IN" there is the status request for the SetAddress.  The first try results in a "NAK" - negative acknowledgement, the device was too busy to answer.  The USB controller retries the IN and gets the status back - unfortunately, by this time it seems that the Atmel USB microcode is confused and things don't work right from then on.  What we observed with Windows and Linux is that the "IN" for the status was delayed until after the next start-of-frame (SOF), which gave the device time to process the request before being hit by a demand for status.  Modifying the NetBSD driver to do the same made things work.
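As a sanity check of that reading of the trace, the 8-byte DATA0 payload above can be pulled apart by hand.  The field layout comes from the USB 2.0 spec (bmRequestType, bRequest, then little-endian wValue, wIndex and wLength); this little shell sketch is my own, not part of the sigrok tooling:

```shell
# Decode the SETUP payload "00 05 02 00 00 00 00 00" from the capture.
payload="00 05 02 00 00 00 00 00"
set -- $payload
bmRequestType=$1                   # 00: host-to-device, standard, device
bRequest=$2                        # 05 is SET_ADDRESS in the USB spec
wValue=$(( 0x$4 * 256 + 0x$3 ))    # little-endian: the address to assign
case "$bRequest" in
    05) req="SET_ADDRESS" ;;
    *)  req="other" ;;
esac
echo "request=$req address=$wValue"
# prints: request=SET_ADDRESS address=2
```

So the host is assigning the device address 2, and the NAKed IN that follows is the status stage of that very request.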

The decoding was not always perfect - I think because the USB data rate was right at the limit of the sample rate of the logic analyser (remember Mr. Nyquist says you need a sample rate of at least twice your highest frequency to properly sample a signal) - so sometimes I would see decode errors.  Fortunately, the decode mostly worked right and provided us with valuable clues.
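The arithmetic behind that worry is simple enough:

```shell
# Full speed USB signals at 12Mb/s; the Saleae captured at 24MHz,
# which is exactly the Nyquist minimum of two samples per bit - no
# margin at all for clock drift or jitter.
bit_rate=12000000
sample_rate=24000000
samples_per_bit=$(( sample_rate / bit_rate ))
echo "samples per bit: $samples_per_bit"
# prints: samples per bit: 2
```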

Friday, September 06, 2013

Arduino uno/Freetronics etherten attaches now

It took a lot of thrashing about and false trails but we now have the Arduino Uno (plus others) attaching, and the Atmel processors will also attach in DFU mode (used for flashing the boot loaders), whereas before they would fail with an address error.  I had to resort to sniffing the USB transactions using a Saleae Logic and decoding them with sigrok-cli.  What this showed was that the uhci driver was feeding the control request bus transactions in too fast.  By changing how the transactions were fed, and also disallowing what are known as "short" transactions for the control requests, the attach now works fine.  Many thanks to fellow NetBSD developer Nick who waded in, helped with suggestions and formulated the final working fix.  Hopefully a proper fix will hit NetBSD-current soon.

Sunday, August 04, 2013

Another Red Hat Cert

Just last week I sat and passed the dreaded RHS333 Enterprise Network Security Services exam.  The instructor I had said that he only knew of one person who passed this exam at the first sitting - himself.  Now he knows two...  Not that this matters much now: Red Hat are deprecating this certification in favour of a new one.  That doesn't invalidate what I have done, but going forward there will be a new certification taking the place of RHS333.  Regardless, I thought I would put down my thoughts on how to prepare for these exams, as it applies to pretty much all the Red Hat certifications, just in case it helps someone else.

Firstly, I set up a lab environment which was just an extension of what I had described in the Linux lab in the lap posting.  I added a couple of new networks but made them routed instead of NAT type networks, because it seemed like cross network traffic was being NATed with the IP address of the bridge interface, which messed up the IP address based access control exercises.  On one network (the "good" network) I kickstarted a couple of machines to perform the exercises on; on the other network (the "bad" network) I kickstarted another machine to use as the "bad guy" in the lab exercises.  Then I would just run through the lab exercises until I could do them without even thinking, rebuilding all the machines before going over the labs again so I would have clean machines each time.

Here are a few more tips:

  • If you have the book from the associated course then do all the exercises in the labs and do them well - make sure you understand what you are doing.  Don't skip any, because anything that is covered in that book can be in the exam.
  • If a lab does not work then find out why; this is very important.  If you screw up, you need to know why, so you can either immediately recognise the error in the exam and fix it or just avoid making the error in the first place.  I have seen someone screw up a lab exercise and then go on to make the same error in the exam - they failed because of it.
  • You don't have to memorise everything.  One of my instructors said, "this is an open man exam" meaning that you have all the man pages and documentation that is part of the standard RHEL distribution during the exam.  You can install whatever you like on the exam machines.  So, knowing where to look for information instead of rote memorisation is a valid tactic.  A lot of the time the samples provided in the man pages, sample configuration files or under /usr/share/doc are enough to get you going during the exam.  Just cut and paste from the documentation into your config file and adjust to suit the exam requirements.  So make it part of the exam preparation to hunt down where the trickier configuration documentation is kept so you can immediately bring it up during the exam.  Going hunting during the exam wastes precious time.
  • If you are having problems with something during the exam that other things do not rely on, then just move on.  You can always revisit the question later; perhaps you will have worked out what to do, or have time to spare to fiddle around.  By moving on you may be able to pick up extra marks on things you can get working instead of fruitlessly beating your head against a wall.
  • Run the chkconfig on immediately after installing or starting any new service that has been asked for.  If you are asked to provide a service then it must come up on reboot - ensure it does.  The same goes for selinux settings and other system settings - make sure they are permanent from the get-go; it saves having to rework things later.  Same with firewall settings: if iptables is running then add the appropriate rules for the service as you go - I usually just edit /etc/sysconfig/iptables directly and reload the service, which saves typing because you can copy another existing line (though, command line history would be just the same...)
  • Make sure you are careful with file modes and ownership - I must admit to being sloppy with this during the exam and it did cost me.
  • Leave enough time for a couple of reboots and carefully check that everything that has been asked for still works after the reboot.  I have found that after drilling on the lab exercises I have plenty of time for this; in all the exams I have faced since really working the lab exercises in my own time, I have found I can easily complete the entire exam with plenty of time to spare.
Hopefully this stuff will make a difference for someone else...

Friday, July 12, 2013

NetBSD, sendmail and smtp auth

Firstly, I do like having a MTA on my laptop. It permits things like send-pr to work and allows me to queue up mails while I am off network and have them relay when a network appears. For better or worse, the default MTA that ships with recent versions of NetBSD is postfix.

I don't really like postfix much, I really prefer sendmail.  I have used sendmail from back in the days when you had to hand craft your sendmail.cf - the m4 configuration stuff was not around.  I am used to the rich debug options you get with sendmail, something I sorely miss with postfix.  Since postfix is the default MTA on NetBSD, I thought I would at least give it a chance, and I have been using it for quite some time, but a few days ago it committed what I think is an unforgivable sin: postfix refused to relay mail, and did so for no clear reason.  Yes, initially there seemed to be a problem with the stunnel that securely tunnels a connection to my ISP's smtps server, but that resolved itself and yet mail still sat in the queue.  A reboot (not that I rebooted due to that - my laptop gets rebooted a lot for other reasons) and a restart of postfix did not move the mail.  I had to tell postfix to push the mail out... that just sucks in my book; the MTA should try to flush the queue if there are pending messages when it starts up.  So, time to bring sendmail back.  This is how I configured sendmail to use the stunnel tunnel to my ISP's secure mail server and perform the SMTP auth my ISP requires.

Install stunnel from pkgsrc and then edit /usr/pkg/etc/stunnel/stunnel.conf. Add:

accept = 5000
connect = securemail.server.your.isp:smtps
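(With the stunnel 4.x configuration format, those two lines would normally sit inside a named service section with client mode enabled; a fuller sketch of the relevant part of stunnel.conf, where the service name is my own invention, might look like this:)

```
client = yes

[smtps-tunnel]
accept = 5000
connect = securemail.server.your.isp:smtps
```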

Set up stunnel to start at boot by adding an entry (stunnel=YES) to /etc/rc.conf and putting a copy of /usr/pkg/share/examples/rc.d/stunnel into /etc/rc.d.  Now stunnel should start on boot (actually, I have a dhclient hook script that restarts stunnel when I get a new IP).  That should be it; if you start up stunnel then you should be able to test the connection by doing:

telnet localhost 5000

If all is well then telnet should connect and open a session, and you should see an SMTP greeting.  This is all you really need to do, though it does leave you vulnerable to a man in the middle attack, where someone could manipulate your DNS to make stunnel think it is connecting to the secure mail server when it is really connecting to the attacker's host.  Your ISP may provide a digital certificate that you can use to validate you are connecting to the correct machine (mine does).  Put the certificate into /usr/pkg/etc/stunnel/certs.pem.  My ISP only provided their own certificate; I had to download the root certificate bundle from the certificate issuer's web site myself.  You can find out the issuer by running:

openssl x509 -in /usr/pkg/etc/stunnel/certs.pem -noout -issuer

Once you have the issuer's certificate bundle, just append the certificates to the end of the certs.pem file (assuming all the certs are in PEM format of course).  Then tell stunnel to start using the certificates by adding this to stunnel.conf:

verify = 3
CAfile = /usr/pkg/etc/stunnel/certs.pem

Setting verify to 3 tells stunnel to verify the peer with a locally installed certificate.  Restart stunnel and perform the telnet test again; you should again get a successful connection to your ISP's secure mail server.  If you don't, then something is broken.  There is a debug option in stunnel.conf; if you set it, a stunnel.log will be written into the location that stunnel is chroot'ed to, which by default is /var/chroot/stunnel/.  If you see:

CERT: Verification error: unable to get local issuer certificate

then your issuer certificate is either missing or incorrect.
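The debug option mentioned above is a pair of stunnel.conf settings; a sketch (the log level and file name here are my own choices, and the output path is interpreted relative to the chroot directory):

```
debug = 7
output = /stunnel.log
```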

Once stunnel is configured and tested, it is time to move on to sendmail.  Sendmail is available in pkgsrc; I normally build my own packages instead of installing binary ones.  These are the steps I followed.  To build sendmail with SASL support, add this to /etc/mk.conf:

PKG_OPTIONS.sendmail=   sasl

Make and install sendmail.  Copy the sendmail and smmsp scripts from /usr/pkg/share/examples/rc.d/ to /etc/rc.d.
Also install cy2-plain, otherwise you will get "AUTH=client, available mechanisms do not fulfill requirements" in the maillog if you try to use PLAIN for SMTP authentication.  I am guessing that if a different authentication scheme is used then you should install the matching cy2 package to get support for that scheme.

With sendmail installed we next turn to the sendmail config.  Firstly create a file in /usr/pkg/share/sendmail/cf/.  I normally use the naming convention machinename.mc, where machinename is the hostname of the machine the sendmail configuration is for.  The file had the contents:

VERSIONID(`@(#) $Revision: $')
FEATURE(`authinfo',`hash /home/sendmail/client-info')dnl
define(`SMART_HOST', `[localhost]')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 5000')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 5000')dnl
define(`confCHECKPOINT_INTERVAL', 10)dnl
define(`confMESSAGE_TIMEOUT', 3d/4h)dnl

Usually people put the authinfo file under /etc/mail; I have put mine in /home/sendmail, mainly because my /home partition is encrypted using CGD so, without my CGD passphrase, people cannot steal my authentication details even if they have physical access to the machine.  Note also that the port number in RELAY_MAILER_ARGS and ESMTP_MAILER_ARGS (in my case 5000) must match the number defined on the accept line in the stunnel.conf file.

Create the sendmail configuration file by running the .mc file through m4 together with cf.m4; from the cf directory that is something like:

m4 ../m4/cf.m4 machinename.mc > sendmail.cf

Copy the resulting sendmail.cf to /etc/mail/sendmail.cf.  If this is a first time install of sendmail you may need to build and install the submit.cf file into /etc/mail too; the submit.mc source is in the same directory as the custom configuration file source.

The above sendmail configuration expects a sendmail hash database in /home/sendmail/client-info.  To create this, first create a file /home/sendmail/client-info with the contents:

AuthInfo:localhost "U:username@your.isp.domain" "I:username@your.isp.domain" "P:sekret!"

Since we have stunnel listening on port 5000 on localhost, all we need to define is an entry for localhost.  The U and I parameters are for authentication and authorisation; usually they are both the same, and most places seem to use your email address.  It is best to check these details with your mail provider.  The P field is the password for the account.
After the client-info file is created, change directory to /home/sendmail (or wherever you placed the file) and run:

makemap hash client-info < client-info

to create the hash database.  Make sure the directory and all the files in it are only readable by root.

Next we need to update /etc/mailer.conf to tell NetBSD to use sendmail instead of the default postfix.  Just run this command:

ln -fs /usr/pkg/share/examples/sendmail/mailer.conf /etc/mailer.conf

Finally, edit /etc/rc.conf and add:

postfix=NO
sendmail=YES
smmsp=YES

to turn off postfix and start sendmail (and the smmsp queue runner) at boot.  At this point you should be able to shut down postfix, start sendmail and test sending emails.

Sunday, May 26, 2013

It's been a while since the last post... I am still struggling with getting the Arduino attached properly.  It seems to be something to do with the handling of the SetAddress command.  If I hack the kernel code so that it does not expect a status back from the Arduino in response to the command then the serial port attaches and works correctly.  The problem is that the LUFA code looks absolutely fine and any changes I make break the attach on Windows.  I am still digging away at this to see if I can get something working.  I don't think adding a quirk that says "don't expect status on SetAddress" is really a good solution.

Last week was spent fighting with crusty old software as I moved it from Solaris 2.5.1 machines onto a shiny new T4-1 running Solaris 10.  The first problem I faced wasn't a crusty one, just a bug with Solaris 10.  I configured samba to write out a kerberos keytab for me, with a view to using the keytab to authenticate users against an Active Directory domain using kerberos.  I had done this on a test machine and it worked fine, but on the T4 I was getting an error saying that my kerberos password was wrong when I tried to do a "net ads join".  The stupid thing was I could get a kerberos ticket fine using kinit.  After a bit of thrashing about I tried a samba package that I had built myself instead of the system one and found that the join worked fine with exactly the same samba configuration file.  A trawl of the Oracle support site turned up patch 119757; version 25 fixes a bug "samba net ads join on T4 fails".  Sure enough, applying that patch fixed the join problem.

Then it was onto the crusty old software, all of it so old that the people who sold it have disappeared off the face of the earth or just won't acknowledge the software ever existed.  The old hardware was E420R/E280R vintage equipment, well past retirement.  The old machines had some flexlm licence servers running on them.  I set up a couple of zones with the right hostids on them and copied all the files over.  Then the fun began.  First, the licence managers complained about "Error: No license for this platform."  After a lot of searching around I found this is because the licence server checks the HW_PROVIDER value using systeminfo(); Oracle changed this from Sun_Microsystems to Oracle Corporation a while back, and the licence server expects the old value and barfs when it doesn't see it.  I ended up using dtrace to check the execname of the binary calling systeminfo and returning "Sun_Microsystems" when the licence server tried the call.  That eliminated the platform error.  Unfortunately, that wasn't the end of it.  The licence server then started bitching that it could not create a lock file and that a vendor daemon was already running, but this was a lie.  After a bunch more searching I found that the vendor daemon was performing a getdents on /, and if the inode numbers for . and .. were not the same it would barf.  Since I was running these programs in a zone the inode numbers did not match.  Again, dtrace to the rescue: intercept the getdents call and, if the licence binary had made it, modify the return so that the inodes for . and .. were the same.  Once this was done the licence servers started up and ran fine.  Most of the crusty old software that relied on the licences worked ok; just one application consistently core dumped when running on Solaris 10.
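For illustration, the systeminfo() interception looked something like the following D script.  This is a sketch reconstructed from the description above rather than the actual script used: the licence daemon's execname ("lmgrd") is a stand-in, and it assumes SI_HW_PROVIDER is command number 8 as defined in <sys/systeminfo.h> on Solaris.  The -w flag is needed because copyoutstr() is a destructive action:

```
dtrace -wn '
syscall::systeminfo:entry
/execname == "lmgrd" && arg0 == 8/
{
        self->buf = arg1;       /* remember the caller-supplied buffer */
}
syscall::systeminfo:return
/self->buf/
{
        /* overwrite the answer with the value the licence server wants */
        copyoutstr("Sun_Microsystems", self->buf, 17);
        self->buf = 0;
}'
```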
Due to needing to keep a hardware emulator running, which required an sbus card and associated driver, we had to keep an old SparcStation 5 running 2.5.1 (diskless booted from the new machine), so the misbehaving application was able to be run, albeit slowly, from there.

All this would have been so much harder without dtrace and an internet to trawl for ideas...

Friday, January 25, 2013

Building Arduino firmware on NetBSD

I needed to rebuild the firmware for the atmega8u2 used as a USB to serial bridge on an Arduino Uno clone.  Unfortunately, at the time I was doing this the version of avr-gcc in NetBSD pkgsrc did not support the atmega8u2; even the one in pkgsrc Work In Progress (pkgsrc-wip) was not sufficient.  This meant I had to build the toolchain manually to get a version of avr-gcc with atmega8u2 support.  Fortunately, the process for performing the build is straightforward.  These are the steps I used.

Firstly, I made a working directory for all the various components I needed, I also selected the path /usr/local/avr as a destination for all the resultant binaries and libraries.  I did this to keep all the avr related stuff together instead of being splattered around /usr/local which is the default install location.

I downloaded and unpacked the latest binutils sources, changed into the binutils directory and created a build directory in there:

tar zxf binutils-2.23.tar.gz
cd binutils-2.23
mkdir build
cd build

Then I configured the binutils build with the correct prefix and target, and built and installed the binutils:

../configure --prefix=/usr/local/avr --target=avr
gmake install

Next, I downloaded and unpacked the latest gmp library sources:

tar xjf gmp-5.0.2.tar.bz2
cd gmp-5.0.2

Configured, built and installed it:

./configure --prefix=/usr/local/avr
gmake check
gmake install

Then came the mpfr library sources; unpack them:

tar xjf mpfr-3.1.1.tar.bz2
cd mpfr-3.1.1

This build needs to reference libraries and includes installed in the previous steps, so before going further I needed to let the compiler and linker know where the dependencies were:

export CFLAGS="-I/usr/local/avr/include"
export LDFLAGS="-L/usr/local/avr/lib -Wl,-rpath -Wl,/usr/local/avr/lib"

Once this was done I could configure, build and install:

./configure --prefix=/usr/local/avr
gmake install

The mpc library was next: download, unpack, configure, build and install:

tar zxf mpc-1.0.1.tar.gz
cd mpc-1.0.1
./configure --prefix=/usr/local/avr
gmake install

Once all the gcc dependencies were built, I could build the compiler itself.  Again: download, unpack, create a build directory, configure, build and install:

tar jxf gcc-4.7.2.tar.bz2
cd gcc-4.7.2
mkdir build
cd build
../configure --prefix=/usr/local/avr --target=avr --with-gcc --with-gnu-as --enable-languages=c,c++ --disable-nls --disable-libssp --with-dwarf2
gmake install

Then I built the AVR C library; for this the avr-gcc compiler is required, so I had to add the appropriate bin directory to the PATH:

PATH=$PATH:/usr/local/avr/bin
export PATH

tar jxf avr-libc-1.8.0.tar.bz2
cd avr-libc-1.8.0
mkdir build
cd build
../configure --prefix=/usr/local/avr --build=`../config.guess` --host=avr
gmake install

Once all the toolchain was in place, rebuilding the firmware was just a matter of following the instructions in the arduino firmware readme file and then using gnu-make to build the firmware.  I had to fix a few things the compiler complained about, all of them seemed to be gcc being more strict about writable strings being defined in read-only memory but once these were fixed the firmware built without problem.  I had to resort to Windows and flip to load the firmware onto the board, the new firmware booted up and ran fine.  I was then able to load modified versions of the firmware to try and debug the problem of this particular arduino board not attaching properly under NetBSD.  More on this later.