Linux is understandably popular as an operating system for embedded systems. SoC vendors in particular like to supply a Linux "distribution" that is tailored for their platform. This "distribution" usually includes the binaries for an appropriate toolchain. In my opinion you should stop using this toolchain long before you are ready to release your product.
The binary toolchain supplied by your vendor is easy to use; it might even be statically linked, making it easier still. Just plonk it in the right place and everything magically works.
But,
1. One day you're going to have a bug. That bug will either require some debugging in files that are supplied as part of the toolchain (ld.so, glibc) or require you to make some modifications to the toolchain in order to help investigate the problem. You might even find a toolchain bug that you need to fix. This sort of problem is almost guaranteed to occur at a point when it is deemed too dangerous to switch to a self-compiled toolchain. That's if you're lucky and you have the source code for the toolchain and it actually compiles for you.
2. In order to comply with the GPL you need to release working source code for certain bits of the toolchain anyway. How can you be sure that this stuff actually compiles unless you have done so yourself?
3. You need to support your product long after your vendor has moved on to another generation of chips. Will their toolchain still work on whatever host operating system you are using then?
So, at the very least you should get the source code for the toolchain from your vendor and then compile it yourself. Use the version you compiled yourself. This leaves you in a much better position when the unexpected occurs. If your vendor won't give you the source for a toolchain they've given you in binary form then find another vendor that understands software licensing.
Of course you could just compile your own toolchain from scratch and use that, but creating cross-compilation toolchains is certainly not easy; perhaps that subject is worthy of a future post.
Friday, 14 November 2008
Tuesday, 5 August 2008
mipsel-linux-strip: Not enough room for program headers, try linking with -N
I was rather confused when I started getting this error while attempting to strip a Linux MIPS shared library after moving to a new toolchain that used binutils-2.18:
BFD: st4lu6Am: Not enough room for program headers, try linking with -N
mipsel-linux-strip: st4lu6Am: Bad value
Other shared libraries could be successfully stripped.
The clue was that these shared libraries were generated with an earlier toolchain that used an older version of binutils.
It turns out that this is caused by binutils-2.18 wanting to add a NULL segment even when the binary didn't originally have one. Applying the fix makes the problem go away.
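If you want to check whether a particular library already has such a segment you can look at its program headers; a minimal sketch, assuming the same toolchain prefix and a made-up library name:
mipsel-linux-readelf -l libexample.so   # look for a NULL entry in the program header list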
I only really mention this because the fix description doesn't contain the error message I saw thus making it hard to Google for a solution.
Sunday, 20 April 2008
Accessing older Rio MP3 players as an unprivileged user
By default unknown USB devices seem to be owned by user root and group root on Ubuntu Gutsy and Hardy. This is inconvenient when the device is an MP3 player that you'd rather access as a normal user.
I still use my Rio S50 flash player regularly. After its space has been boosted a little with an SD card it's perfect for listening to MP3s of radio programmes and podcasts: it is small enough that it is easy to keep track of what is on it, and the AA battery lasts forever.
Anyway, I use the rioutil tool for downloading content to the player. It uses libusb to talk to the device without requiring a kernel driver.
With Ubuntu Hardy the device nodes that libusb uses seem to have changed, which broke my old rules. After a little bit of strace I was able to come up with the following udev rules, which I placed in /etc/udev/rules.d/45-rio.rules:
SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
SYSFS{idVendor}=="045a", SYSFS{idProduct}=="5006", GROUP="plugdev"

These rules may well work on Gutsy too.
Of course your user must be a member of the plugdev group, or you can specify a different group if you wish.
If you want to make other Rio flash portables work then just repeat the rule specifying all product numbers from 5001 (Rio600) to 500f (Rio Cali).
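For example, a rule for the Rio600 would look something like this (untested on my part, since I only have the S50):
SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", \
SYSFS{idVendor}=="045a", SYSFS{idProduct}=="5001", GROUP="plugdev"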
Friday, 18 April 2008
Cross-compiling boost 1.34.x and 1.35.0
There seem to be lots of people asking how to cross-compile boost and very few answers. One of the better answers works for v1.33.x but breaks with v1.34.0.
After digging around for a while trying to make it work I was finally given the answer by the esteemed Peter Hartley who had managed to cross-compile boost 1.34 as part of his Just The Linux distribution. His solution seems to work for v1.35.0 too.
The trick that had eluded me until that point was to tell both the user-config.jam file and bjam about the cross compiler.
Something like:
echo "using gcc : : nicearch-linux-g++ ;" > user-config.jam
make BJAM_CONFIG="-sGXX=nicearch-linux-g++" install

If, like me, you want to only generate static libraries and support multiple builds in the same tree then you might need a bit more cleverness:

build=/tmp/nicearch/build
staging=/tmp/nicearch/staging
CXX=nicearch-linux-g++
CC=nicearch-linux-gcc

mkdir -p $build $staging
echo "using gcc : : $CXX ;" > $build/user-config.jam
bjam --toolset=gcc -sGXX=$CXX -sGCC=$CC \
  --prefix=$staging --build-dir=$build \
  --user-config=$build/user-config.jam --without-python \
  variant=release link=static threading=multi

This should be relatively easy to turn into a buildroot package file but I'm no longer using buildroot to build boost so I didn't need to.
Tuesday, 25 March 2008
Investigating SSL SMTP configurations with telnet-ssl
I use the pretty standard Debian Exim4 configuration on my mail server. I don't define AUTH_SERVER_ALLOW_NOTLS_PASSWORDS so plain text authentication is not supported unless the connection is encrypted.
I was faced with a mail client that was having trouble connecting. My usual tactic when faced with such problems is to try things via a manual SMTP connection to see what's going on. The only problem is that I couldn't get to the point that authentication was advertised unless I issued a STARTTLS command and at that point just typing stuff into telnet(1) isn't enough.
The telnet-ssl package is normally used to make direct SSL connections but it is also capable of making plain connections which can then be turned into an SSL connection later. This is easy to use during an SMTP connection.
First we need to make a plain connection to the SMTP port:
somewhere.else.com:~> telnet-ssl mail.somewhere.com 25
Trying 4.3.2.1...
Connected to mail.somewhere.com.
Escape character is '^]'.
220 mail.somewhere.com ESMTP A secret server

So, now we're connected so let's check that SSL connections are supported by querying the capabilities:

ehlo me
250-mcrowe.com Hello mac at somewhere.else.com [1.2.3.4]
250-SIZE 52428800
250-PIPELINING
250-STARTTLS
250 HELP

Now we can start SSL:

starttls
220 TLS go ahead

At this point we need to get back to the telnet prompt to switch to SSL mode. The default telnet escape character is Ctrl ]:

^]
telnet-ssl> startssl
SSL: Server has a self-signed certificate
SSL: unknown issuer: /C=Ptoing/ST=Wibble/CN=nowhere.com/emailAddress=postmaster@nowhere.com

If the server has a valid certificate then you probably won't see any output here.

Now when we ask for the capabilities we get the AUTH types we expected:

ehlo me
250-nowhere.com Hello mac at somewhere.else.com [1.2.3.4]
250-SIZE 52428800
250-PIPELINING
250-AUTH PLAIN LOGIN
250 HELP

By the time I'd got to this point I'd discovered my problem: no authentication types were being advertised at all.
Monday, 18 February 2008
Why I Like Perforce
After lots of articles explaining why I hate Perforce I thought it only fair to write a few explaining some of the things I like about it. I'm sure that other version control systems do a better job than Perforce does with some of these things but in my opinion Perforce does them better than CVS and current stable versions of Subversion at least.
Merge tracking
Perforce keeps track of what previous changes have been merged (or integrated) into a working tree and commits this information along with the files when the files are submitted. This means that it is often trivial to merge changes in from a branch, do a quick build to check that everything is fine and then check them in.
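A typical merge session might look something like the following sketch; the branch paths are made up:
p4 integrate //depot/rel1.0/... //depot/main/...   # schedule the merge; changes already integrated are skipped
p4 resolve -am                                      # auto-merge everything that merges cleanly
# build and test, resolving any remaining conflicts with a plain 'p4 resolve'
p4 submit                                           # the integration history is submitted along with the files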
Change lists
Although it doesn't excuse the submit command not taking multiple filename arguments, I think that I mostly like the idea of being able to group my changed files into change lists. The change list can be created and the diff checked over before finally checking it in. With this tactic there's a certain risk of failing to check in important files if they happen to be sitting in a different long-lived changelist though. It would be ideal if somehow change lists could be at less than file granularity but I'm not sure how this could be implemented without offering a list of patch hunks.
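Something like the following, with a made-up changelist number and file names:
p4 change                      # create a new pending changelist; note the number it reports (1234 below)
p4 reopen -c 1234 foo.c bar.c  # move the open files for this piece of work into it
p4 opened -c 1234              # check exactly what the changelist contains before submitting
p4 submit -c 1234              # submit just that changelist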
Perforce Proxy
P4P is essential when working remotely with a large depot. It intelligently caches file revisions that pass through it so that future requests for those files can just be retrieved from the cache, greatly increasing performance and reducing network traffic. It's not perfect in that if you submit a file through the proxy it doesn't appear to cache the contents immediately, thus forcing a further download of the file. Nevertheless if you have many clients or multiple users in the same location then a proxy is worth the tiny amount of effort it takes to set it up.
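Setting one up is roughly this simple; the host names, port and cache directory are only examples:
p4p -d -p 1666 -t central-server:1666 -r /var/cache/p4p   # run the proxy pointing at the real server
export P4PORT=proxy-host:1666                              # clients then talk to the proxy instead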
Update 2008/11/17: Subversion 1.5 now supports (to some degree) all of these features.
Labels: perforce, rant, version control
Monday, 11 February 2008
Why I Hate Perforce: 4. It's difficult to defer existing work
This is part of a series of articles explaining why I hate Perforce. Please see "Why I Hate Perforce: The Background" first.
The real world being the way it is, work is often started or even mostly completed and then something more important comes along which means that work must be deferred, possibly indefinitely. It is important, if only for programmer self-esteem, to archive that work safely before continuing. This needs to be done with the minimum of effort and risk because it generally happens only when something urgent needs to be done.
There are a number of ways of doing this.
1. If the work had been done on a task branch then any pending changes can just be checked in and the branch kept around but not merged for as long as necessary.
Unfortunately task branches have overhead and aren't always used. Creating a task branch retrospectively would seem like a sensible tactic but is hard work with Perforce because there's no equivalent to cvs up -r or svn switch to switch to a branch whilst preserving changes in the working copy. I've tried just updating the client spec to point to the new branch hoping that it would offer to merge the changes but Perforce just complains that it cannot clobber the files since they are opened for editing.
2. Keep the working copy (or client in Perforce terminology) around forever. The downside to this is that a large amount of disk space could be taken up and any finger macros may need to be re-learnt. It is also hard for someone else to continue the work because the Perforce client will be owned by the original author.
3. Archive the entire working copy as a unit (e.g. using tar(1) or zip(1)) then revert the files in the working copy so that work can continue. This doesn't work well with Perforce because the working copy state is stored on the server. In order to do anything meaningful with the archive you'd need to sync your working copy back to the revision it was at when the archive was created. If this isn't done there's a risk of confusion as to where changes were made. Other systems that keep sufficient state in the working copy (such as CVS and Subversion) don't suffer from this problem. In fact the working copy can be moved to a different location (or even a different machine) and work can continue there.
4. Produce a patch based on the current state of the depot that can be applied later. This would be a perfectly good solution if it weren't such a pain to generate sensible patch files with Perforce. Having tried hard to make p4 diff generate something acceptable to patch(1) I ended up writing a Ruby script to do it. This script is available from my Perforce Scripts page.
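Roughly, the manual version of option 4 looks like this (file names made up); the raw p4 diff output still needs its headers massaging before patch(1) will accept it, which is what the script automates:
p4 diff -du > ~/deferred-work.diff   # unified diff of everything currently opened for edit
p4 opened > ~/deferred-work.files    # record which files were open, including adds and deletes
p4 revert //...                      # then revert to get back to a clean client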
When I had to do this recently I ended up taking option 4. It did seem to work but it was far more effort than I expected. Next time it should be easier because I've already got the script!
Now read about Why I Like Perforce.
Edit 2010/12/01: Since this article was written Perforce 2009.2 has introduced shelving. This is certainly useful but doesn't solve many of the problems raised here. In particular changes can only be unshelved back to the same location in the depot (albeit perhaps on a different client spec or by a different user.) This means that moving the changes to a different branch is just as painful as is creating a branch retrospectively for shelved changes.
Thursday, 24 January 2008
Why I Hate Perforce: 3. It's hard to find files that need adding
This is part of a series of articles explaining why I hate Perforce. Please see "Why I Hate Perforce: The Background" first.
When adding new files to a working tree it is of paramount importance that these files get checked into the revision control system at the correct point. The difficult part is finding the files that need adding - once that's been done adding them is easy.
Finding new files is made difficult because working trees usually contain a lot of other files that shouldn't be added to revision control: editor backup files, files generated during compilation, backup modified versions of files that are under revision control that you want to keep around as reminders, temporary files that haven't been cleaned up properly. It's difficult to separate the wheat from the chaff.
Other revision control systems solve this problem by allowing such files to be added to ignore lists. Usually there's a global ignore list for files that are almost always ignored such as object files and editor backup files. In addition there's a specific ignore list for each directory; this is useful for generated header files and patterns that would otherwise be too broad. CVS uses a file named .cvsignore and Git uses the similar .gitignore. Subversion uses a directory property named svn:ignore.
Perforce has no equivalent to this functionality. The Eclipse Perforce plugin seems to have invented the concept of a .p4ignore file out of necessity but I haven't tried it.
My current (suboptimal) workaround for this is a script that runs find(1) and passes the results to p4 fstat to identify files that aren't under revision control, then weeds out common files that should be ignored. I've got parts of an improved Ruby version of this script working but haven't yet polished it enough for release.
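The core of that script is something like the following; the ignore patterns and the exact wording of the fstat message are assumptions:
# fstat reports files the depot doesn't know about on stderr,
# so swap the streams and keep only those lines.
find . -type f ! -name '*.o' ! -name '*~' \
  | p4 -x - fstat 2>&1 >/dev/null \
  | sed -n 's/ - no such file(s)\.$//p'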
Now read Part Four.
Tuesday, 22 January 2008
Why I Hate Perforce: 2. Working copy state is stored on the server
This is part of a series of articles explaining why I hate Perforce. Please see "Why I Hate Perforce: 1. The Background" first.
A working copy (client in Perforce terminology or check-out in CVS terminology) contains absolutely nothing but the files that you instructed Perforce to place there from the depot (using your client specification) and files that you caused to be placed there yourself (e.g. object files, new source files, .p4config files etc.) Perforce itself keeps no state information in your working tree (although you may choose to with .p4config files).
From some points of view this can seem like quite a good idea. Tools such as find(1) and grep(1) can't accidentally look at such data. There are no extra directories (hidden or otherwise) to confuse the uninitiated. But this information must be stored somewhere and Perforce chooses to keep it all on the server. This has a number of consequences.
The most obvious implication of keeping all the state information on the server is that if the server is down or inaccessible then you cannot perform any operations that need that state. Perforce normally marks all files as read-only until an explicit request is made to edit them. Doing this requires a connection to the server. If such a connection is unavailable then it is necessary to resort to chmod(1) or attrib to make the file writable, and then to remember to run p4 diff -se when the server is available again in order to correctly mark the files as editable. Editor plug-ins that provide version information automatically for version controlled files may block for a while until they discover that the server is unavailable.
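In practice the offline dance looks something like this (the file name is made up):
chmod u+w src/foo.c          # make the read-only file writable while the server is unreachable
$EDITOR src/foo.c            # carry on working
# ...later, once the server is reachable again:
p4 diff -se | p4 -x - edit   # find files changed behind Perforce's back and open them for edit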
Another annoyance with keeping the state information outside the working copy is that the working copy cannot easily be moved or copied elsewhere. This might be useful due to disk space constraints, wanting to shelve some work in progress or wanting to divide current work in two. I'll come back to this topic in a later article.
The alternative is to keep the state information locally. CVS keeps all working copy state in the working copy itself. Subversion keeps that along with pristine copies of source files which allows it to only send changes when submitting files and allows diff operations without contacting the server. This means that it is possible to copy and move around CVS and Subversion working copies and the state is copied or moved at the same time. SVK keeps information in a local per-user location but does allow moves as long as you keep it informed. Distributed version control systems keep so much information that the server is only required when new changes are to be pulled from it or pushed to it.
Now read Part Three.
Monday, 21 January 2008
Why I Hate Perforce: 1. The Background
I'm about to post a few articles explaining why Perforce and I just don't get on in many ways. But before I do I feel it is important that I make the background for these criticisms clear.
I've been using revision control systems for well over ten years. Initially I had brief outings with Microsoft Delta and then a longer and more painful experience with Microsoft SourceSafe on Windows even when sharing code among only three developers. Perforce is definitely a big improvement over these!
Once I learnt about CVS I started using that. Initially just for me on UNIX and Linux but later on shared projects that needed to compile on Windows too. I was forced to learn about tagging, vendor branches (and later why they suck in CVS) and merging. CVS wasn't perfect but it did work. I understood how it worked fundamentally, even to the point of fiddling around by hand in the repository when it became absolutely necessary.
I keenly watched the development of Subversion and periodically tried to import our CVS repository into it. I recommended to others starting new projects that they should choose Subversion rather than CVS.
I started a new job where everything was kept in a Perforce depot. I was used to the CVS workflow and initially felt a little like a fish out of water. It gradually dawned on me that many issues I had with Perforce were impeding or adding risk to my work. In the end I decided that some of these issues were fundamental to the Perforce design.
Of course Perforce has some very good features. It is certainly better than CVS in many ways. Perhaps I'll write articles about these too in the interest of fairness.
Because I come from the world of CVS it is quite likely that I'll accidentally use CVS terminology rather than Perforce terminology in these articles but I'll try not to!
Of course I may have missed features in Perforce or alternative techniques that invalidate some of my points. If I have then please let me know via the comments.
Some of the scripts I use to work around the shortcomings I see in Perforce are available via my web page.
I should probably also note that I've played with various other systems such as Bitkeeper, Clearcase, Arch, Bazaar, Darcs, Mercurial, SVK and Git. Of these the one I've tried to use most is Git and I would like to use it more given the chance.
Now read Part Two.
Labels: perforce, rant, revision control
Wednesday, 16 January 2008
A 100% Linux household
It was at LinuxConf Europe 2007 back in September that I made the decision to really try to run Linux habitually, day-to-day, on my laptop. I've always had Linux installed on my laptop: initially Debian, but when I was forced to reinstall the machine I decided to give Ubuntu a try, and I was impressed enough that stuff just worked on my not particularly Linux-friendly laptop that I stuck with it.
Don't get me wrong: I've been a daily Linux user since 1994. I'd just not spent that much time running it as my desktop OS since leaving university. When I entered the world of work I found that I needed both Windows and Linux and got fed up with rebooting between them. I found that having one Linux machine running a VNC server and using a Windows box as a client was infinitely more usable than the reverse so I worked that way round. I used Linux via VNC for embedded software development and Windows for Windows software development. For much of the time my Windows box was effectively just used as a thin client. Often the Linux box was actually rather powerful and shared by many users.
So when I was in the position of having independent home server machines and desktop machines I ran Debian Linux on the server and Windows on the desktop. The Linux machine was the one that stayed on all the time. It was there I ran (and continue to run) mutt(1) to read my personal email and slrn(1) to read Usenet news. The Windows box was switched off or put into standby when I wasn't using it. When the desktop became a laptop the situation was the same except because the laptop was portable I installed Linux on it too so that I'd have access to Linux when I was away from home. I didn't really run Linux on it much but occasionally it proved useful.
But as I was sat at the conference I noticed that it seemed to mostly be the “suits” that dared to run Windows on their laptops at a Linux Conference. I wasn't a suit so I chose to always boot into Linux. I did the few things I needed to do easily and quickly enough. The conference left me feeling so positive about Linux in general that I decided that I needed to bite the bullet and abandon Windows at home. Windows was becoming very slow and annoying on the machine anyway so I had an added incentive to do so. Unfortunately Linux was rather slow too when I started using it in anger. I resorted to adding more memory and this helped greatly.
So, since the beginning of September I've only rebooted into Windows for two reasons. Once was to watch an episode of something that the Tivo missed using the BBC iPlayer (this was last year when Linux wasn't supported). The other was to satisfy my immediate desire to play with the Lego Mindstorms set I received for Christmas. I shouldn't need to do the first again and I've now tired of the visual programming language used by Lego Mindstorms and will investigate NXC.
I've managed to do everything else I needed to do under Linux. Some things are easier, some things are a little harder, most are faster but a few are slower. Thanks to user switching even my wife uses it for reading her email and web access. Some bugs continue to annoy me but nowhere near as much as the Windows task bar locking up for several minutes every so often just because it feels like it.
So, I've taken the plunge and I don't see myself going back. The next step is to work out how I can lose the Windows box at work too!
Thursday, 10 January 2008
Dealing with SIGINT in spawned processes
I'm writing a Linux command line application that has the ability to spawn processes of the user's choosing when required. My application waits for the process to finish and then continues. But this raised a problem: if the launched process takes a while to run and the user presses Ctrl-C then not only does the spawned process get killed, so does my process! In this regard I'd prefer to work much more like a shell and regain control after the spawned process has terminated.
In order to solve these problems I was forced to revisit stuff that I'd read about long ago but not fully understood the implications of at the time. Thanks are due to Zefram for pointing me in the right direction.
Both processes die because they are in the same process group. When the user hits Ctrl-C a SIGINT signal is sent to all processes in the active process group. The signal is not sent to the shell that started my application because the shell arranged for me to be in a new process group (by a means not dissimilar to that below).
Process groups have a group leader - in fact it is the process ID of the group leader that is used as the process group ID.
So, step one is to make sure that the spawned process runs in its own process group (which will also contain any processes it starts unless it takes specific action to the contrary). This is done by calling setpgid(2).
But unfortunately that is insufficient. When pressing Ctrl-C the SIGINT is still sent to the process group that contains my application; therefore I exit, leaving the spawned process still running.
In order to explain this properly I need to briefly mention sessions. For the purposes of this explanation you can think of a session as representing a terminal. Each session can have a number of process groups. One of these process groups will be the foreground process group and there may be background process groups. The above behaviour resulted because although I'd placed the spawned process in a different process group that process group was in the background (rather like running it from a shell in the background with &.)
I needed to resolve this problem by moving the spawned process group to the foreground. This can be done with tcsetpgrp(3) but it's not quite as simple as that. By default background processes that try to write to the terminal will be sent the SIGTTOU signal. The default action for this signal is to stop the process (just as it is when you hit Ctrl-Z to suspend a process). tcsetpgrp counts as terminal output so my newly created child process just stopped as soon as I called it. In order to stop this happening I needed to arrange to ignore that signal for the duration of the call.
After the spawned process is complete I needed to put my process group back into the foreground again. Again I had to protect myself against being stopped by SIGTTOU.
The following program shows all this at work. The error handling is not too hot.
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <signal.h>
#include <sys/wait.h>
#include <sys/types.h>

int execfgvp(const char *file, char const * const argv[])
{
    pid_t child_pid = fork();
    if (child_pid == 0) // We're the child
    {
        // Create a process group for us
        if (setpgid(0, 0) < 0)
            exit(126); // Failed to setpgrp

        // Become the active process group
        signal(SIGTTOU, SIG_IGN);
        tcsetpgrp(0, getpid());
        signal(SIGTTOU, SIG_DFL);

        execvp(file, (char * const *)argv);

        // Failed to spawn process
        exit(127);
    }
    else if (child_pid > 0) // We're the parent
    {
        int status;
        if (waitpid(child_pid, &status, 0) < 0)
            return -1; // Failed to wait. Pass errno on.

        // Make us the foreground process group again.
        signal(SIGTTOU, SIG_IGN);
        tcsetpgrp(0, getpid());
        signal(SIGTTOU, SIG_DFL);

        if (WIFEXITED(status))
            return WEXITSTATUS(status);
        return -1;
    }
    else
        return -1; // Fork failed. Pass errno on.
}

int main()
{
    const char *argv[] = { "ping", "localhost", NULL };
    if (execfgvp(argv[0], argv) < 0)
    {
        fprintf(stderr, "Failed to start process: %m\n");
        return 1;
    }

    printf("Process finished. Returned to foreground.\n");
    printf("Press a key to exit.\n");
    getchar();
    return 0;
}
See also:
- Linux System Programming, Robert Love (O'Reilly 2007) Chapter 5 pp154-159.
- Advanced Programming in the UNIX Environment, W. Richard Stevens (Addison-Wesley 1992).
- setsid(2), setpgrp(2), tcsetpgrp(3)
Edit: 2008/01/11 Fixed angle brackets in source code.