Goodbye Dryspell – Hello Raspberry Pi + Wiegand + RFID

•December 2, 2013 • 16 Comments

Yeah.. I can’t believe it’s been 2 years since my last blog post… I knew it had been a while… But 2 years?  How life has changed.  But never mind that.  I’m here to write about something a wee bit more relevant.

Over my hiatus, I’ve dabbled in a number of projects, but probably the most recent, and largest, of those has been the purchase of a new house with a new garage that is slowly being converted into a small motion picture production studio.  I’ll get into those details later… maybe.  To that end, I have been working on a security system for said studio that would allow those who use it (more than just myself) to have unrestricted, but secure, access to the building.  So, I’ve been building a Raspberry Pi-based RFID access system.  You know, the basics:  RFID reader/keypad, electric door strike, and a Raspberry Pi for brains, PiCamera for… Never mind, that’s classified.

I’ve been building this over the last few weeks, working on various aspects of the software (SDL, OpenCV, etc.), but recently I started working on the interface between the Wiegand RFID/keypad and the Pi.  There are a few articles and posts about this out there, but so far all of the code I could find to read the Wiegand protocol was kind of hit-and-miss, most likely due to the timing variances on the Pi when not running a real-time kernel.  So, of course, I whipped up a new library.  I’ve debated putting the code somewhere, but it’s literally less than 150 lines, including the example app.  Maybe I’ll change my mind, but until then, I wanted a place to post the code.

/*
 * Wiegand API Raspberry Pi
 * By Kyle Mallory
 * 12/01/2013
 * Based on previous code by Daniel Smith and Ben Kent
 * Depends on the wiringPi library by Gordon Henderson
 *
 * The Wiegand interface has two data lines, DATA0 and DATA1.  These lines are normally held
 * high at 5V.  When a 0 is sent, DATA0 drops to 0V for a few us.  When a 1 is sent, DATA1 drops
 * to 0V for a few us.  There are a few ms between the pulses.
 *
 *   *************
 *   * IMPORTANT *
 *   *************
 *   The Raspberry Pi GPIO pins are 3.3V, NOT 5V.  Please take appropriate precautions to bring the
 *   5V Data 0 and Data 1 voltages down.  I used a 330 ohm resistor and 3V3 Zener diode for each
 *   data line.
 */
#include <stdio.h>
#include <stdlib.h>
#include <wiringPi.h>
#include <time.h>
#include <unistd.h>
#include <memory.h>

#define D0_PIN 0
#define D1_PIN 1

#define WIEGANDTIMEOUT 3000000
#define WIEGANDMAXDATA 32

static unsigned char __wiegandData[WIEGANDMAXDATA];    // can capture up to 32 bytes of data -- FIXME: Make this dynamically allocated in init?
static unsigned long __wiegandBitCount;            // number of bits currently captured
static struct timespec __wiegandBitTime;        // timestamp of the last bit received (used for timeouts)

void data0Pulse(void) {
    if (__wiegandBitCount / 8 < WIEGANDMAXDATA) {
        __wiegandData[__wiegandBitCount / 8] <<= 1;
        __wiegandBitCount++;
    }
    clock_gettime(CLOCK_MONOTONIC, &__wiegandBitTime);
}

void data1Pulse(void) {
    if (__wiegandBitCount / 8 < WIEGANDMAXDATA) {
        __wiegandData[__wiegandBitCount / 8] <<= 1;
        __wiegandData[__wiegandBitCount / 8] |= 1;
        __wiegandBitCount++;
    }
    clock_gettime(CLOCK_MONOTONIC, &__wiegandBitTime);
}

int wiegandInit(int d0pin, int d1pin) {
    // Setup wiringPi
    wiringPiSetup();
    pinMode(d0pin, INPUT);
    pinMode(d1pin, INPUT);

    wiringPiISR(d0pin, INT_EDGE_FALLING, data0Pulse);
    wiringPiISR(d1pin, INT_EDGE_FALLING, data1Pulse);
    return 0;
}

void wiegandReset() {
    memset((void *)__wiegandData, 0, WIEGANDMAXDATA);
    __wiegandBitCount = 0;
}

int wiegandGetPendingBitCount() {
    struct timespec now, delta;
    clock_gettime(CLOCK_MONOTONIC, &now);
    delta.tv_sec = now.tv_sec - __wiegandBitTime.tv_sec;
    delta.tv_nsec = now.tv_nsec - __wiegandBitTime.tv_nsec;
    if (delta.tv_nsec < 0) {    // borrow from the seconds field
        delta.tv_sec--;
        delta.tv_nsec += 1000000000L;
    }

    if ((delta.tv_sec > 0) || (delta.tv_nsec > WIEGANDTIMEOUT))
        return __wiegandBitCount;

    return 0;
}

/*
 * wiegandReadData is a simple, non-blocking method to retrieve the last code
 * processed by the API.
 * data : is a pointer to a block of memory where the decoded data will be stored.
 * dataMaxLen : is the maximum number of -bytes- that can be read and stored in data.
 * Result : returns the number of -bits- in the current message, 0 if there is no
 * data available to be read, or -1 if there was an error.
 * Notes : this function clears the read data when called. On subsequent calls,
 * without subsequent data, this will return 0.
 */
int wiegandReadData(void* data, int dataMaxLen) {
    if (wiegandGetPendingBitCount() > 0) {
        int bitCount = __wiegandBitCount;
        int byteCount = (__wiegandBitCount / 8) + 1;
        memcpy(data, (void *)__wiegandData, ((byteCount > dataMaxLen) ? dataMaxLen : byteCount));

        wiegandReset();
        return bitCount;
    }
    return 0;
}

void printCharAsBinary(unsigned char ch) {
    int i;
    for (i = 0; i < 8; i++) {
        printf("%d", (ch & 0x80) ? 1 : 0);
        ch <<= 1;
    }
}

int main(void) {
    int i;

    wiegandInit(D0_PIN, D1_PIN);

    while(1) {
        int bitLen = wiegandGetPendingBitCount();
        if (bitLen == 0) {
            usleep(5000);
        } else {
            char data[100];
            bitLen = wiegandReadData((void *)data, 100);
            int bytes = bitLen / 8 + 1;
            printf("Read %d bits (%d bytes): ", bitLen, bytes);
            for (i = 0; i < bytes; i++)
                printf("%02X", (int)(unsigned char)data[i]);
            printf(" : ");
            for (i = 0; i < bytes; i++)
                printCharAsBinary(data[i]);
            printf("\n");
        }
    }
    return 0;
}

This is linked with -lpthread -lwiringPi -lrt

One of the key improvements in my code is the use of the high-resolution clock kernel module to handle the timeout after data is read.  In the pidoorman code, this was based on a countdown loop, which would vary depending on system load and whether your Pi was overclocked.  Wiegand is a very tolerant protocol, so there should be few, if any, issues with the timing of the pulses themselves.  My code also utilizes the new ISR API from wiringPi.
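The library hands back raw bits; turning those into a card number is reader-specific. As a sketch (these helpers are mine, not part of the library above, and assume the common 26-bit format: an 8-bit facility code and 16-bit card number framed by two parity bits), a decode layered on top of wiegandReadData() might look like:

```c
/* Fetch bit i from the buffer filled by wiegandReadData().  Bits arrive
 * MSB-first, so the final, partial byte keeps its bits in the low-order
 * positions. */
static int getBit(const unsigned char *data, int bitCount, int i) {
    int byte = i / 8;
    int bitsInByte = (byte < bitCount / 8) ? 8 : (bitCount % 8);
    return (data[byte] >> (bitsInByte - 1 - (i % 8))) & 1;
}

/* Hypothetical helper: unpack a standard 26-bit frame.  Bit 0 is even
 * parity over bits 0-12; bit 25 is odd parity over bits 13-25. */
int wiegandDecode26(const unsigned char *data, int bitCount,
                    unsigned *facility, unsigned *card) {
    int i, p;
    if (bitCount != 26)
        return -1;
    for (p = 0, i = 0; i <= 12; i++)
        p ^= getBit(data, 26, i);
    if (p)
        return -1;                      /* leading even-parity check failed */
    for (p = 0, i = 13; i <= 25; i++)
        p ^= getBit(data, 26, i);
    if (!p)
        return -1;                      /* trailing odd-parity check failed */
    for (*facility = 0, i = 1; i <= 8; i++)
        *facility = (*facility << 1) | getBit(data, 26, i);
    for (*card = 0, i = 9; i <= 24; i++)
        *card = (*card << 1) | getBit(data, 26, i);
    return 0;
}
```

Readers that emit other frame lengths (34-bit, keypad digits, etc.) lay their fields out differently, so check your reader's datasheet before trusting a decode like this.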


Oh. My. God. San Francisco Rush: Extreme Racing, and the eMaker:Huxley

•November 10, 2011 • 7 Comments

No, I’m not building replacement parts for SF Rush with my eMaker:Huxley… They just both happen to be recent acquisitions. Sadly, I must admit that the SF Rush has been taking the majority of my time, despite my desire to actually finish building the Huxley. The SF Rush generates more atta-boy points from the familial unit. In my 6-year-old son’s words, “The motors are cool, but the rest is just boring… let’s race instead!”

But that’s another story. First, let me back up and say… Oh. My. God. I had nearly forgotten that I had this thing called a blog. I think my last blog post was made a year ago… honestly, I could check to know for sure, but that’s not really the point. So what brings me back, dusting off the old wp-admin interface? Shit, I already let that cat out of the bag… Yeah, new toys.

As I was thinking about it this evening, I thought, “I should blog the restoration of my SF Rush.. that would be cool. Oh, wait.. already done.” then I thought, “I should blog the building of the Huxley. Naw, if my son thinks actually building it is boring, I can’t imagine the dry eyes to be had from reading about building it. Besides… already done.”

In fact, that’s the singular thought that brought me back to my blog. More hacking. I don’t think I ever saw my blog being used to this end, but it seems popular in that regard. Who am I to complain?

So, what am I hacking this time? Well, the SF Rush, of course. Just to be clear, for any poor soul unfortunate enough not to know what “SF Rush” is: it’s probably one of the best coin-operated “driving simulators” to ever hit your local FLYNN’S. My wife and I have a couple of games that we bought just after getting married, and had always wanted a Rush. We would play it at the movie theater endlessly… well, at least until our movie was about to start. When I managed to score one on eBay for a whopping $1.25 + ~$300 in shipping and handling, it was too good to pass up. Anyway, I digress. Short answer: San Francisco Rush: Extreme Racing is a lime-green, full-sized, sit-down arcade game in which you race against 7 other cars through the streets of San Francisco. The game was built by Atari and released in 1997, offered “realistic” physics, and sported force-feedback in the steering wheel, along with a 4.1 soundtrack and surround/quadraphonic audio. The game also supported “linking” up to 7 additional cabinets together to allow you to race with your friends, or participate in a tournament. Good times.

Anyway, so, onto the hacking. There are a number of mods out there, such as replacing PROMs and hard drives with CF cards, upgraded firmwares, etc. But that was all a bit too “run of the mill.” While cleaning a cabinet that had 15 years of “17-year-old” all over it, I noticed the link cable that interconnects multiple game cabinets was standard Cat-5. A slightly closer investigation revealed that the interconnect was standard Ethernet. Oh joy!

While the ultimate plan is to get at least one more of these machines, so we can race our friends in the basement before heading in to watch a flick in the theater room, I had the thought: “I wonder what the packets look like…” Within a moment, I’d installed WireShark on my laptop, plugged in the Cat-5, and was capturing packets.

It was pretty boring, but it confirmed that the data is in fact standard Ethernet II. Nothing quite as elegant as IP, but better than Token Ring, or some proprietary serial-based protocol that just happened to be pushed through Cat-5.
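For anyone following along at home, “Ethernet II” just means every frame on the link cable starts with a plain 14-byte header: destination MAC, source MAC, and a 16-bit EtherType, big-endian on the wire. A minimal sketch of checking that from a captured frame buffer (offsets per the Ethernet II framing convention; nothing here is Rush-specific):

```c
#include <stdint.h>

/* The EtherType lives at bytes 12-13, big-endian on the wire. */
uint16_t eth2_ethertype(const uint8_t *frame) {
    return (uint16_t)((frame[12] << 8) | frame[13]);
}

/* Values >= 0x0600 mean Ethernet II framing; anything smaller would be
 * an 802.3 length field instead. */
int is_ethernet2(const uint8_t *frame) {
    return eth2_ethertype(frame) >= 0x0600;
}
```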

So, now what? For starters, I was thinking that it might be fun to record Time Trials, and then allow you to race against yourself. Not unlike Ghosts, except that you can actually crash into yourself. I also thought it might be interesting to look into creating advanced AIs for the other cars. That’s getting a bit more involved, since it would require more detailed knowledge of the track layout, etc., but I could certainly start with one of my own races and apply some genetic algorithms to “evolve” my own driving.

So, anyway, I’ll do my best to post progress and keep my motivation, given the Huxley sitting next to me on the table, unfinished. Of course, if you stumble across my page and find this idea interesting and worth its merit, drop me a line, or comment; it might just be the motivation I need.

UPDATE: Well, after plucking around till way too late last night, I’ve decided to change direction with this little hack, in the short term… While I was starting to make some progress talking back to the machine, I quickly realised that I would be severely restricted in my reverse-engineering of the protocol without having a second machine on hand.

What occurred to me though, was that I could write an app that would let me proxy the Ethernet II packets over the Internet to another, remote Rush game, thus allowing multiple people to race against each other, without being in the same room.

If by some chance, someone with another Rush: ER game comes across this, and is interested in helping out in testing, please, drop a comment.

Next update:  After talking with others, hacking around, and doing a ton of investigation, here is what I’ve found so far.  One area that had me puzzled in my Wiresharking was that various packets with the (seemingly) same ID/type would have different lengths, and different values at similar offsets.  While possible, it didn’t jibe well, and I was getting really frustrated, until…

While digging around last night I actually found a field that seems to have a direct correlation to the packet length.  With that, I’ve found the following structures:

All Packets:  Basic “SF Rush” Header.  Includes a packet Id, Timestamp, and a couple of [fixed?] fields.
All Broadcast Packets:  Message Type (0x1815), Length, Game ID, Car/Console ID, handful of unknown fields.
— Various Sub-“Broadcast” messages: each has a corresponding state, 0x0-0x5, 0x9, and 0xA (identified so far).
— 0x00 : appears to be broadcast once at start of “attract mode”
— 0x01 : appears to be broadcast once at console startup
— 0x09 : ping at 3-second intervals when not racing (during “attract” and “game setup”)
— 0x02 : appears to be broadcast at start of “game setup”
— 0x05 : appears to be broadcast after selecting track
— 0x03 : appears to be broadcast after selecting car, drones/force-feedback options
— 0x04 : appears to be broadcast after selecting auto/manual
— 0x0A : appears to be broadcast prior to start of race

Still trying to pick out the various fields in the packets.  Unfortunately, I didn’t get much time to try to correlate different game setup options with the packets.  I think I’m going to write a “diff” function in my app that will highlight packet data values that have changed from previous packets.
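That diff function doesn’t exist yet, but the idea is simple enough to sketch: hex-dump the new packet, bracketing any byte that differs from the previous capture (the function and its names are mine, not from any real tool):

```c
#include <stdio.h>

/* Print cur as a hex dump, bracketing bytes that changed since prev.
 * Returns the number of changed bytes, handy for skipping identical
 * packets entirely. */
int diffPackets(const unsigned char *prev, const unsigned char *cur, int len) {
    int i, changed = 0;
    for (i = 0; i < len; i++) {
        if (prev[i] != cur[i]) {
            printf("[%02X]", cur[i]);   /* changed since the last packet */
            changed++;
        } else {
            printf(" %02X ", cur[i]);
        }
    }
    printf("\n");
    return changed;
}
```

Run against two captures of the “same” message type, the brackets should make the length/counter/state fields pop out visually.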

I’m trying hard not to write a decoder for Wireshark.  That seems like overkill.  But I’ve had to tell myself that more than once now…

I also followed up with some other machine collectors at the Arcade-Museum.  Apparently a few people have used OpenVPN to connect remote machines over the Internet, with success.  Of course, one of my concerns was simply finding enough people to actually coordinate and play with, and that concern was borne out by those early attempts of others.  It also appears that the majority of machine owners have upgraded their systems to SFR: The Rock.  I seem to be in the minority with an actual SFR: Extreme Racing system.  I may track down the chips and do the upgrade… More tracks! Woot!

New update:  I just recently purchased a second RUSH.  I have to make a run out to Lake Tahoe to pick it up, but it should make for some much easier wire-sharking!

DavMail to The Rescue

•June 14, 2011 • 1 Comment

I tell my kids all the time, “Hate is a very strong word.. we don’t use it.. we can dislike something, but we should never hate it.” Of course, I turn around and profess my hate for Exchange. Hate really is a strong word, and it’s probably still not appropriate for me to use it.. but I will. I HATE Exchange Server.

I find Exchange repulsive on so many levels, but I think the real reason for my dislike simply boils down to not being able to easily use desktop Linux solutions to take advantage of all the shit I’m required to use/deal with in the office. Shared calendars, address books, etc. Linux has all of these things, but few OSS solutions work with the ever-present Exchange server.

Forced to “find a solution” (not my words), I ran across DavMail, a handy little Java app that seems to be a brilliant solution to a very troubling problem.

Basically, DavMail talks to any Exchange server and converts all the Exchange requests into open-standard protocols, such as CalDAV, IMAP, CardDAV, and LDAP. Configuration is minimal (I only needed to specify the URL to my Exchange server), and then I configured my email/calendar client for IMAP/CalDAV/etc.
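For reference, those settings live in a davmail.properties file. A minimal sketch might look like the following; the OWA URL is a placeholder for your own server, and the ports shown are the usual DavMail defaults, so double-check them against your install:

```
davmail.url=https://owa.example.com/exchange/
davmail.caldavPort=1080
davmail.imapPort=1143
davmail.ldapPort=1389
davmail.smtpPort=1025
```

Your mail client then points at localhost on those ports instead of at Exchange directly.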

I personally use Thunderbird with the Lightning plugin for Calendar. In all of the simplicity of it, the hardest part was figuring out the various URLs for calendars and address books.

In my case, calendar URLs look like this:

Address book connect info looks like this:

To find/subscribe to a shared calendar, lookup in the address book the name of the calendar. It should have an email address associated with it. Use that email address when adding a calendar.

Oh, Graphene

•June 7, 2011 • 5 Comments

I’ve been really obsessed with graphene lately.. You know, graphene? It’s that not-so-newish carbon nano-mesh that a couple of researchers realised (not discovered) a couple of years ago, and won the 2010 Nobel Prize in Physics for? If you still don’t know what I’m talking about, then just skip the rest of this post.


Okay, so graphene.. we’re on the same page now, right? Anyway, so I’ve been obsessing about it lately. Well, lately being the last 12 months or so. I really believe that graphene will change the face of technology for the foreseeable future, much more so than plastics did in the ’60s.

That said, I started subscribing to RSS feeds on graphene papers, and investigating companies doing graphene research. I started buying small piles of stock in graphite mines, particularly the ones that have deposits of high-purity graphite, and just recently started looking into companies that are filing graphene-based patents. It’s becoming such a mass of information each day that I need a way to distil what I find down to the gems, in a way that I can remember them. Of course, the best way to do that is to write it down. So, why not write it down for others to share, and encourage others to share their own findings too?

I’m no physicist, but I understand some more-than-basic concepts, and I’m always eager to learn more. That said, while a lot of what I find tends to be very technical research, I’m actually looking for more practical and “understandable” information on graphene developments.

So, please, send me your comments about cool graphene discoveries, and the individuals and companies behind them. If you are a researcher yourself, I would love to hear about the work you’re doing (if you are at liberty to discuss it).

To start things off, just today I found this old interview with one of the Nobel prize-winning researchers, which I think summarises graphene well, and offers a brief, yet disturbing, glimpse into the mentalities of big business.

Stupid Rant

•May 4, 2011 • Leave a Comment

This is just a very stupid rant, but it annoys the shit out of me on regular occasion.

Why do consumer electronics manufacturers believe that the right-hand side of a device is the best location for a headphone connector? And why do more and more consumer headphones have the cable coming out of the right ear?

If you are a right-handed person, like (insert statistic here), shouldn’t you, as with a wrist-watch, put these kinds of things on the LEFT side, so that they don’t get in your way and interfere with your primary task?

What do you think? Am I the only one who feels this way?

Blu-ray Movie Authoring in Linux

•October 17, 2010 • 48 Comments
IMPORTANT: Blu-ray authoring in Linux needs your help! It has come to my attention that UDF 2.5, which is required for authoring Blu-ray content, is currently not supported as a writeable filesystem in Linux. The current UDF tools (discussed below) haven’t been maintained in years and appear to be a dead project. In order to make Linux Blu-ray authoring a reality, we need to revive this project (udftools), and continue the development of UDF 2.5 and UDF 2.6 support. But!!! All is not lost. There are workable solutions, ranging from commercial (yet still accessible) software, to hoping that your Blu-ray player doesn’t care about UDF 2.5 support any more than you do… like mine did.

In the wake of producing ‘Reco‘ this summer, I’ve taken on the responsibility of DVD and Blu-ray authoring for the film.  DVD authoring on Linux has mostly evolved to the point of being within the reach of the average user, with a number of solutions available.  Blu-ray, on the other hand, is a whole different story.  Blu-ray authoring can be a very complicated process, especially when you get into menus and such.  Lucky for me, we are only producing Blu-ray discs for exhibition at festivals, which means we don’t care about fancy menus.  Just put the disc in, and go…

This post will chronicle my attempts at BD authoring towards that goal, keeping the entire workflow limited to OSS, with Linux as the authoring platform.

So far, I found these useful tools…

ffmpeg (to encode the streams, of course!)
tsMuxeR (to mux the streams and generate the BD folder structure)
mkudfiso (to generate an ISO of the BD folder structure)
udftools (to generate the UDF image, this is in the Ubuntu repos)
dvd+rw-tools (to actually burn the ISO to a BD-R disc).

First things first… export your movie at the highest quality possible. Spare no expense here, just keep it in a format that FFMPEG will like. In my case, I export from Piranha using the Quicktime plugin, which supports HUFFYUV (lossless), and 24bit, 48k audio.

Then, using ffmpeg, we have to convert this to H.264. ffmpeg uses libx264, which has some nice presets to make things easy. For me, quality is everything, so time isn’t an issue. I’ll take 2-pass, please. I made a little shell script to actually do the encoding for me.

ffmpeg -i $1 -f rawvideo -an -pass 1 -vcodec libx264 -vpre slowfirstpass -b $3 -bt $3 -threads 0 -passlogfile $2.log -y /dev/null
ffmpeg -i $1 -an -pass 2 -vcodec libx264 -vpre hq -b $3 -bt $3 -threads 0 -passlogfile $2.log $2.h264
ffmpeg -i $1 -acodec pcm_s24le $2.wav

I invoke it like this:

$ bluray-movie 36M

The first parameter is the input Quicktime that I created from Piranha. The second parameter is the output filename, minus extension, for the resulting H.264 stream and WAV file. The third parameter is the encoding bitrate, in this case 36Mb/s. Blu-ray allows up to 54Mb/s total, so this should give me a very high-quality encoding while still affording bandwidth for the audio (uncompressed 24-bit/48kHz stereo PCM is only about 2.3Mb/s), without pushing everything to the max.

With this done, we should have two files: a raw H.264 stream and the corresponding 24-bit WAV file.

Addendum, courtesy of Dkottmair (and edited courtesy of me):

In your x264 settings, the x264 guys recommend very specific settings for Bluray (or AVCHD) in order to be fully compatible with any player. For example, there is one chipset used in players that cannot do weighted P-frames (the --weightp option in x264), even though it should according to the standard (yes, I also have no idea how they got this chip approved, but thanks to this shitty chip, all Bluray or AVCHD H.264 now cannot use weighted P-frames!). I’ve seen two players during my try-outs that seemed to have this chip; one was a Sony, can’t remember the other one. It causes a lot of block artefacts.

These are the settings they recommend for x264: --preset slow --tune film --weightp 0 --nal-hrd vbr --vbv-maxrate 15000 --vbv-bufsize 15000 --aud --keyint 24 --bframes 3 --slices 4 --level 4.1 --b-pyramid strict

You can add bitrate control via --bitrate or --crf, and also set reference frames with --ref: 6 for 720p and 4 for 1080p. Also, --vbv-maxrate should be 40000 and --vbv-bufsize 30000 for Bluray; 15000 is for AVCHD, but works on Bluray, too.

“This is all true to spec,” Dkottmair says, having done “a lot of research on the subject, even before x264 got BD-certified! ;-)” Also, it seems a new site was put up specifically dealing with x264 and Bluray encoding. Check it out:

Here are the full three commands Dkottmair uses (the third one needs to be launched in a separate terminal; sending it to the background using & will NOT work!):

mkfifo stream.y4m
mplayer -vo yuv4mpeg:file=stream.y4m -nosound (source-moviefile)

(separate terminal, same directory):

x264 --crf 22 --preset slow --tune film --weightp 0 --nal-hrd vbr --vbv-maxrate 15000 --vbv-bufsize 15000 --aud --keyint 24 --bframes 3 --slices 4 --level 4.1 --b-pyramid strict -o (output.264) stream.y4m

Dkottmair says: “Remember to add/change those parameters mentioned above as needed! And delete that y4m file afterwards, though it won’t eat much of your harddrive, since it always remains at 0 bytes! ;-)”

As a final tidbit, keep in mind that the y4m/x264 encoding won’t handle your audio. Real film/movie geeks won’t blink at this, but if you’re trying to burn home movies, you’ll still need to render out a separate WAV file, which can still be done with the last ffmpeg command above.

Fire up tsMuxerGUI and import the files.  Check the Blu-Ray radio button in the output section, and specify a folder in which to generate the BD folder structure.  Double-check the options under the “Blu-Ray” tab for chapter creation, etc.  When you’re ready, hit the “Start Muxing” button at the bottom of the window.

Before we dig into UDF creation in the next step, you should read this little comment, courtesy of Dkottmair:

“However, there’s one major issue with your usage of udftools: UDF 2.5. The problem is: UDFtools simply cannot create UDF 2.5 (or 2.6), which is a requirement for BD (or AVCHD) according to the standard. The maximum it does is UDF 2.01.

“This is also the reason why Brian gets that error on the PS3, it’s exactly what you see when the disc is not UDF 2.5, I know it, I’ve tried several times to burn working Blurays/AVCHDs with mkudffs-based OSS-tools such as Brasero or K3b! So when your discs play in your player, it is quite likely merely because your player doesn’t care what UDF-version the disc is in – Most players are built to be as compatible as possible, not to adhere 100% to the standard. Just like many DVD-players play DVDs that you burn in ISO instead of UDF…

“The only program I know (I also use it in my article) that can burn UDF 2.5 under Linux is Nero 4, which, btw, works like a charm! I’ve been bitching about these major shortcomings of mkudffs for ages now in several places (mailed lots of people, even filed a bug report for K3b), but it appears nobody seems to care… Nonetheless, these tools claim in their changelog and release notes that they can “burn Bluray” – No. You can burn stuff onto BD-Rs using Bluray-Burners, but you CAN’T burn actual Blurays!”

With that out of the way, let’s look at this not-quite-solution that works for some players, but clearly not all… And remember, we need your help to drive continued support for UDF 2.5/2.6 in Linux! Do your part! File a bug!

Next, we need to make a UDF image file from the output of tsMuxeR. We have to create a UDF filesystem by hand as a file on the regular filesystem, mount it, copy the data into it, unmount it, and then burn the resulting image.

Install udftools from the repos.
Create the file where the filesystem image will be stored:

mkudffs --vid="Blu-ray Movie" --media-type=hd --utf8 ./blu-ray.udf 11826176

Next, create and mount that image as a filesystem. (PS: if you’re curious about the number ‘11826176’, it’s the number of 2k blocks free on the BD-RE disc after formatting.)

$ sudo mkdir /mnt/blu-ray
$ sudo mount ./blu-ray.udf /mnt/blu-ray -o loop

Copy the Blu-ray file structure created by tsMuxeR:

$ sudo cp -R /path/to/bluray-content /mnt/blu-ray


Then unmount the image:

$ sudo umount /mnt/blu-ray

This should create the contents of the Blu-Ray disc in the blu-ray.udf file. Now we can burn this to the BD-R/BD-RE disc.

Again, this shouldn’t take too long. Just be aware: throughout the muxing and image-creation steps, you are making copies of content which is probably very large. Expect hundreds of gigabytes to be consumed by this process, between the uncompressed MOV, the then-compressed H.264 and WAV, the mux of those (which doesn’t recompress, only muxes them), and then a final copy of everything combined into the image. Essentially, you will have 3 copies of your film, NOT including the uncompressed version.

Once you’ve made the image, you should be able to burn it with growisofs. First, use wodim to identify your BD burner:

$ wodim --devices
wodim: Overview of accessible drives (1 found) :
 0  dev='/dev/scd0'	rwrw-- : 'HL-DT-ST' 'BD-RE  GBW-H20L'

In my case, my BD-RE is /dev/scd0.

Before we do the burn, we need to format. Supposedly, ‘growisofs’ will format for us on-the-fly, but it’s not perfect and we’ll likely see an error and the disc won’t finalize. So, we use dvd+rw-format to do the format first:

$ dvd+rw-format -ssa=default /dev/scd0

Then I’ll do the actual burn using ‘growisofs’ with the following command:

$ growisofs -dvd-compat -Z /dev/scd0=blu-ray.udf

Word on the street is, you can also use Nero to create/burn the UDF; though it’s not OSS, it is relatively cheap (assuming it does the job).

There is room for improvement in this whole process…
The UDF image is created at a fixed size, based on the size of the BD-RE media. In fact, your movie (or other BD content) may be smaller than that. Ideally, we’d only like to burn the size of our data, which in this case we won’t know until after we encode and mux. We could figure the number of blocks required (UDF’s default is 2048 bytes per block) by dividing the space required by the BD content after muxing/generating the folder structure by 2048, rounding up, and allocating the UDF filesystem to that number of blocks.
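A sketch of that computation in shell (the BDMV path in the commented lines is an example; du -sb and the shell arithmetic are standard GNU userland):

```shell
# Round a byte count up to 2048-byte UDF blocks.
function blocks_needed {
	echo $(( ($1 + 2047) / 2048 ))
}

# Size the image from the muxed output instead of the media, e.g.:
# BYTES=`du -sb /path/to/bluray-content | cut -f1`
# mkudffs --vid="Blu-ray Movie" --media-type=hd --utf8 ./blu-ray.udf `blocks_needed $BYTES`
```

You’d want a little headroom on top of that for UDF filesystem overhead, but it beats always allocating the full capacity of the disc.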

With the use of mkudffs and udftools, you should be able to create and mount the UDF image file before running tsMuxeR, and then have tsMuxeR render directly into the UDF mount. Then unmount and burn the image. The downside of going this route is that you don’t know what size to make the image before doing the mux. You could make an educated guess: (size of WAV + size of H.264) * 1.10, but regardless, you’ll end up with wasted space.

There is still a real need for a tool like mkudfiso that can take existing data and stuff it into a UDF file. I would encourage anyone capable and knowledgeable to consider submitting a patch to linux-udf/udftools for a tool that provides the same functionality as mkudfiso, but that actually works and is part of the standard implementation.

udftools’ mkudffs appears to allow creating the UDF directly on the media, though the documentation is poor. In theory, you should be able to mkudffs to /dev/scd0, mount, copy, and unmount, but it’s unclear from the documentation how to actually start and finalize the burn. I didn’t have the patience to continue investigating.

UPDATE: In the spirit of awesomeness, I spent the better part of today putting all of this into a nice, slick, and hopefully reliable script. Copy/paste the text below into a file “mkbdmovie”, put the file into /usr/local/bin (or /usr/bin), and chmod +x the file; it should mostly do the rest for you. It won’t install ffmpeg or udftools, or download tsMuxeR, but it will tell you when you don’t have them, and how to get them. It even has some very basic usage if you run it without parameters. A couple of notes that aren’t mentioned in the usage:

If you don’t specify a movie with -m, you can specify -a and -v to pass already-encoded elements to the muxer, in which case it won’t re-encode the H.264/PCM, only remux. If you like, you can specify -a and -v in addition to -m, in which case -a and -v specify where to render the H.264 and PCM assets, so they can be saved and reused if you have to remux everything again later.

In order for the script to mount and copy the BDMV folder structure into the UDF image, mount and cp have to be run via sudo. It will prompt you for your password at that point. If anyone has a solution to that, I’d love to include it in the script.

The script figures the size of the blu-ray folder structure after muxing, and creates the UDF image just large enough to hold that data (per my “room for improvements” above).

The script creates a meta file used by tsMuxeR to determine how to mux everything, including specifying chapter locations. I used some stupid values. I meant to add another parameter to allow specifying them on the command line, but then I got bored. If you want different chapter locations (times) (which you most likely will) you’ll need to modify the script to reflect the new times. It should be pretty self-evident what needs to be changed. If you really want to mix things up, you can run the tsMuxerGUI to find/set all of the parameters you like, and then copy the metadata information into the script.

Finally, the script only creates a UDF file (specified as the last parameter on the command line) from an existing movie. You still need to render the movie out of your NLE/finishing tool, and also run the growisofs command above to do the actual burn.


#!/bin/bash

FFMPEG=`which ffmpeg`
TSMUXER=`which tsMuxeR`
MKUDFFS=`which mkudffs`

function usage {
	echo "The arguments to use are"
	echo "-m: The movie file to encode (ffmpeg compatible)"
	echo "-v: The h.264 video asset to mux for BD content"
	echo "-a: The WAV audio asset to mux for BD content"
	echo "-b: The bitrate for h.264 video encoding (-m), default is 25M"

function test_ffmpeg_x264 {
	# Test whether ffmpeg has libx264 support compiled in
	if [ `ffmpeg -formats 2>/dev/null| grep x264 | cut -c 3-4` != "EV" ]; then
		echo "FFMPEG not compiled with libx264 support." | exit

function test_ffmpeg {
	if [ ! -x $FFMPEG ]; then
		echo 'Could not find FFMPEG in the path.  Try "sudo apt-get install ffmpeg".' | exit

function test_tsmuxer {
	if [ ! -x $TSMUXER ]; then
		echo 'Could not find tsMuxeR in the path.  Download from' | exit

function test_mkudffs {
	if [ ! -x $MKUDFFS ]; then
		echo 'Could not find mkudffs in the path.  Try "sudo apt-get install udftools".' | exit

function test_dependancies {
	echo 'Verifying dependancies...'

function make_bluray_streams {
	#convert the movie to HQ H.264 and WAV
	echo "Encoding ${MOVIE} video to H.264, 1920x1080, ${BITRATE}bps - Pass 1"
	$FFMPEG -i $MOVIE -s hd1080 -f rawvideo -an -pass 1 -vcodec libx264 -vpre slowfirstpass -b $BITRATE -bt $BITRATE -threads 0 -y /dev/null >>mkbdmovie.log 2>&1
	echo "Encoding ${MOVIE} video to H.264, 1920x1080, ${BITRATE}bps - Pass 2"
	$FFMPEG -i $MOVIE -s hd1080 -an -pass 2 -vcodec libx264 -vpre hq -b $BITRATE -bt $BITRATE -threads 0 $H264_FILE >>mkbdmovie.log 2>&1
	echo "Encoding ${MOVIE} audio to PCM, 24bps, 48000 - Pass 1"
	$FFMPEG -i $MOVIE -acodec pcm_s24le -ar 48000 $WAV_FILE >>mkbdmovie.log 2>&1

function mux_bluray_assets {
	echo 'Muxing streams and generating BDMV file structure'
	# create the metafile needed by tsMuxeR

	echo "MUXOPT --no-pcr-on-video-pid --new-audio-pes --blu-ray --vbr  --custom-chapters=00:00:00.000;00:05:00.000;00:10:00.000;00:15:00.000;00:20:00.000;00:25:00.000 --split-size=2GB --vbv-len=500" > $TSMUXER_META
	echo "V_MPEG4/ISO/AVC, \"$H264_FILE\", fps=23.976, insertSEI, contSPS, ar=As source" >> $TSMUXER_META
	echo "A_LPCM, \"$WAV_FILE\", lang=eng" >> $TSMUXER_META

	# mux the two files and generate the BR-structure
	$TSMUXER $TSMUXER_META $BDMV_PATH >>mkbdmovie.log 2>&1

function create_udf_image {
	echo "Creating UDF Image: ${UDF_IMAGE}"
	# calculate the UDF size necessary (as 2k blocks) to fit the data
	UDFSIZE=`du -s -B 2K $BDMV_PATH | cut -f 1`

	# make the udf filesystem, and mount it
	if [ -e $UDF_IMAGE ]

	#MKUDFCMD="/usr/bin/mkudffs --vid=\"$3\" --media-type=hd --utf8 \"$2\" $UDFSIZE"
	$MKUDFFS --media-type=hd --utf8 $UDF_IMAGE $UDFSIZE >>mkbdmovie.log 2>&1
	sudo mount $UDF_IMAGE $UDFWORKSPACE -o loop >>mkbdmovie.log 2>&1

	# Copy the source into the UDF Workspace
	sudo cp -Rv $BDMV_PATH/* $UDFWORKSPACE >>mkbdmovie.log 2>&1

	#unmount, cleanup
	sudo umount $UDFWORKSPACE >>mkbdmovie.log 2>&1

function cleanup {
	sudo rm -rf $UDFWORKSPACE $TSMUXER_META >>mkbdmovie.log 2>&1
	if [ $TEMP_H264 = 1 ]; then
		sudo rm -rf $H264_FILE >>mkbdmovie.log 2>&1
	if [ $TEMP_WAV = 1 ]; then
		sudo rm -rf $WAV_FILE >>mkbdmovie.log 2>&1



if [ $# -le 1 ]; then
	echo "Invalid or insufficient parameters."

while [ $# -gt 1 ]
	case $1
			shift 2

			shift 2

			shift 2
			echo "Using h.264 file: ${H264_FILE}"

			shift 2
			echo "Using WAV file: ${WAV_FILE}"



if [ $MUX_ONLY = 0 ]; then


HVX/P2/MXF Media in Linux

•May 28, 2010 • 18 Comments

I’ve been doing some transcoding lately for a client, who shot some industrial footage on an HVX200 and wanted it transcoded to an alternate format for editing (I guess they didn’t have DVCPROHD codecs). Anyway, it took a bit of figuring out, but here is what I found out.

FFMPEG supports MXF, but apparently not the packaged MXF files that are stored on the P2 cards. There is a handy C++ library and accompanying tools from the FreeMXF project that can do a handful of operations on MXF files.
It’s not terribly intuitive, but apparently, if you use the ‘mxfsplit’ command on the MXF files in the /CONTENTS/VIDEO folder, it will generate a subsequent MXF ‘stream’ that, counter-intuitively, contains the video and all the audio streams in a single MXF file, which can then be processed by FFMPEG. Unfortunately, mxfsplit doesn’t let you specify an output filename; it generates one from the original MXF filename that is anything but, well, intuitive (I like that word today, apparently). So I wrote up a quick script to convert a named MXF file into an FFMPEG-compatible MXF file (with the same name). You can then use the ‘find’ command to process all the files on your media. I recommend making a backup before you do any of this.

The first thing you’ll need to do is go download the MXFLIB package from FreeMXF.  Download, untar, and do the usual build stuff:

tar -xzvf mxflib-beta-1.0.1-gcc4.3-v2.tar.gz
cd mxflib-beta-1.0.1-gcc4.3-v2/
./configure
make
sudo make install

Once we have that, we need the following script, which will call mxfsplit with the path to the video file to convert, and output a new MXF stream, which will contain all the audio as well.  The script will move/rename the stream file, to a more friendly name/location.


Below is an updated script… an unknowing collaboration as a result of a number of comments, and my own attempt to re-use my original script…  The following should be a lot more robust.

Paste the following into vi:

#!/bin/bash

for i in $1/VIDEO/*.MXF
do
	# mxfsplit generates its own (unfriendly) output filename;
	# scrape that name out of its console output
	STREAM=`mxfsplit -m $i | awk '{ FS="( |=)"; if ( $0 ~ /File=/ ) { print $6 } }'`
	OUTPUT="`basename $i .MXF`"
	# move the generated stream to a friendly name in the current directory
	mv $STREAM $OUTPUT.MXF
done

Don’t forget to mark the script as executable:

chmod +x ./

Pass, as the one and only argument to the script, the location of the CONTENTS directory of your MXF tree.  It will make a NEW COPY of your MXF files in your current working directory (and will NOT replace/modify your originals).


Make sure you have enough free disk space for a copy of all your media.

Once that is finished, you should have a new batch of MXF files that are readable by FFMPEG.  From here you can do any transcoding necessary.  If you are particularly sharp, you might consider modifying the script above to do your ffmpeg transcoding on the fly (instead of calling “mv $i…”  you’d call “ffmpeg -i $i -target ntsc-dvd $OUTPUT.mp2”).

ffmpeg -i myp2media.mxf -target ntsc-dvd p2media.mp2

or play directly via ffplay:

ffplay myp2media.mxf
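The batch-processing idea above can also be sketched as a plain loop over the converted files. This version only echoes the commands so you can preview them; drop the leading ‘echo’ to actually run ffmpeg (the ntsc-dvd target is just the example from above):

```shell
# Preview a DVD-target transcode for every converted MXF in the
# current directory; remove the leading 'echo' to actually run ffmpeg.
for f in *.MXF; do
	[ -e "$f" ] || continue                 # no matches: skip the literal glob
	out="`basename "$f" .MXF`.mp2"
	echo ffmpeg -i "$f" -target ntsc-dvd "$out"
done
```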

Hacking the Foscam – Part IV

•May 10, 2010 • 108 Comments

As a result of bricking and subsequently recovering my Foscam, I found a few interesting things out tonight.

If you enter the ‘debug’ mode on the camera, and issue the “boot” command, you can retain access to the console once the camera has booted (this may be possible directly, but it wasn’t apparent).  From here you can access the camera as a linux machine, with standard shell commands, browse directories, etc.

The WebUI firmware is mounted on /home, and it is stored in a separate volume in flash memory.  Despite my bricked camera, and erasing and reloading volumes 6 and 7 with romfs.img and images, my hacked WebUI was still present, as were all of my camera settings.  This is particularly interesting since, while the romfs.img may appear to be limited to a physical size of 2MB, it should be possible to load larger binaries onto the camera (like sshd) from the WebUI firmware, and still have the /etc/init (in romfs) run them from /home/sshd, and possibly also specify a local /home/sshd.conf file.  I am also curious whether it’s possible to symlink /home/root to /, thereby allowing access to the entire memory from the WebUI.

What’s more interesting (though lower-level) about all of this WebUI stuff is that there are other aspects of the flash memory that are utilized, and can possibly be reallocated for different needs.  I.e., right now there is a 2MB ROMFS, and a who-knows-how-big WebUI, but in theory, from what I was seeing, you could potentially combine these volumes into a single volume that could more easily accommodate larger images.

The Sparkfun USB-Serial UART interface I bought was actually small enough that I was able to push it inside the camera, still wired onto the board, and reassemble the camera.  My son is insisting that I “put it in there all the time”, and make a small cutout for the Mini-B USB port, so that I can connect the camera via USB at any time, without having to disassemble the camera.  A very tempting thought.

I noticed after booting the camera, and looking around in the /dev folder, there are 2 video devices.  I need to find a way to dump the data from these devices, selectively across the network, or console.  I find the prospect of two devices interesting.  My gut suspicion is that they are for different image resolutions (since the camera supports 2 modes, 320×240 and 640×480).  One of my long-term hacking goals is to write code that will allow the camera to track motion.  This would be a great start along those lines.

At this point, the ideal next step would be finding source for the ‘camera’ application, but I’d be happy with some decompiled sources.  I guess it’s about time I start installing the ARM/ucLinux build tools.

Recovering the Foscam

•May 10, 2010 • 7 Comments

Got my USB->Serial UART adapter from SparkFun today. A few minutes wiring up the camera’s port per the recovery guide on GadgetVictims, a whole lot of time spent trying to find good XMODEM tools on Linux (cutecom: it’s anything but cute, but it works), and a few minutes poking around with the console before rebooting, and everything is back to FUN!

Hacking the Foscam FI8908W – Part III

•May 6, 2010 • 5 Comments

Well, some good news. Some bad news.

I was able to put some more code together over the last few days, and updated the foscam-util project. The changes utilize Lawrence’s findings relating to the system firmware file, and now the utility ‘fostar’ can pack and unpack the system firmware. With a little bit of magic using romfs and ucLinux build tools, it’s possible to rebuild the firmware file, and thereby to install your own code on the camera.

The bad news. My tools, right now, just work in theory. I bricked my camera tonight after uploading a new firmware. I was able to unpack the original firmware, repack it, and upload it without any problem. But when I modified the romfs image, adding a couple of network binaries (telnetd, ftpd, and ping), as well as an empty /etc/ftpd.conf file, the camera loaded the firmware, didn’t report that it was invalid, but failed to reboot. Earlier I had tried with a firmware that included ssh and sshd binaries. The uploader reported the image was invalid, likely because the entire firmware package exceeded 2MB. I’m not sure what to make of the new situation though. I’m tearing apart my camera and hopefully I can plug into the JTAG and force the old firmware again.

Frustrating, but the only time progress isn’t made is when you’re not working on the problem, right? Hopefully I can find the parts to get my camera back up sooner than later.

Until I find out what’s wrong… be careful using the foscam-util stuff… Just say’n.