Welcome to Bits By The Pound! You have reached a page where technology enthusiasts can talk about their passion.


This note explains how to mount a WebDAV drive in Windows 7. In particular, it addresses issues that arise when the WebDAV server enforces SSL with a self-signed certificate and uses basic authentication. All the steps detailed below are performed on the Windows 7 computer where the Web drive is intended to be used.

This note relates to:

  • Windows 7 client
  • WebDAV drive served by Apache 2.2


Adjust Registry

This step is required if the WebDAV drive enforces user authentication using HTTP “Basic Auth”.

Start the program “regedit.exe” using the following steps:

  • Press “Start” button
  • Choose “Run”
  • Enter “regedit.exe” and press enter

In the regedit program, navigate down the tree using the following path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters

Within the “Parameters” section, set the value of the setting “BasicAuthLevel” to 2. The meanings of the values are:

  • 0 – Basic authentication disabled
  • 1 – Basic authentication enabled for SSL shares only
  • 2 or greater – Basic authentication enabled for SSL shares and for non-SSL shares

Close the regedit program.

Reboot Windows.
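
For reference, the same change can be captured in a .reg file and imported via regedit; this is a sketch, with the key path and value name taken from the steps above:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"BasicAuthLevel"=dword:00000002
```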

Import Self-Signed Certificate

This step imports the WebDAV server's certificate into the store of trusted certificates. It is necessary only if the certificate is self-signed.

Open Internet Explorer as an administrator. Only IE will work for this step (Firefox and Chrome do not help in this situation).
To open IE as an administrator, press on the “Start” button and find the Internet Explorer entry. Right click on the Internet Explorer entry in the start menu and choose “Run as administrator”.

  • In IE, browse to the site where the WebDAV is stored. It should be a URL that looks like “https://www.mycompany.com/dav/”. If the certificate has not been previously accepted, a dialogue box opens warning the user that the certificate is not trusted. Choose to “Continue to this website (not recommended)”.
  • Once the page of the WebDAV is displayed, the address bar will contain a tab titled “Certificate Error”. Click on this tab and choose “View Certificates”.
  • Click on the “Install Certificate…” button.
  • Choose the option “Place all certificates in the following store” and click on the “Browse…” button
  • Select “Trusted Root Certification Authorities” folder and press “OK” button
  • Click on “Next” and then “Finish”
  • Accept warnings

Once the certificate is installed, IE can be dismissed since it is no longer used.

Mount WebDAV Drive

Mount the WebDAV drive using the following steps:

  • Start Windows Explorer (Start > All Programs > Accessories > Windows Explorer)
  • Right-click on “Computer” icon and select “Map Network Drive”
  • Select a letter for your drive
  • Enter the URL for the WebDAV drive (https://www.mycompany.com/dav/) in the field titled “Folder”
  • Select “Connect using different credentials”
  • Press “Finish” button
  • Enter user name and password

At this point, the WebDAV drive should be accessible like any other drive.


This note explains how to send an e-mail message using JavaMail, including the use of an SSL connection or the built-in STARTTLS security.

This note applies to:

  • Java version 1.7
  • JavaMail version 1.4.1

If using Maven, the JavaMail artifact can be obtained using the following dependency:
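
For JavaMail 1.4.1, the dependency looks like this (the coordinates are those of the `javax.mail:mail` artifact published to Maven Central):

```xml
<dependency>
  <groupId>javax.mail</groupId>
  <artifactId>mail</artifactId>
  <version>1.4.1</version>
</dependency>
```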


Often, a properties file is loaded to configure the SMTP transport layer. This allows a user to configure the outgoing mail server without having to change the code. Here is a code excerpt that obtains a session and sends a simple message:

    import java.util.Properties;

    import javax.mail.Authenticator;
    import javax.mail.Message;
    import javax.mail.PasswordAuthentication;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public void send(String to, String from, Properties mailProperties) throws Exception {

        // Check for user name and password
        String userName = null;
        String userPassword = null;
        String prot = mailProperties.getProperty("mail.transport.protocol", null);
        if( null != prot ){
            userName = mailProperties.getProperty("mail." + prot + ".user", null);
            userPassword = mailProperties.getProperty("mail." + prot + ".password", null);
        }

        // Create session
        Session mailSession = null;
        if( null != userName && null != userPassword ) {
            final String name = userName;
            final String pw = userPassword;
            Authenticator auth = new Authenticator(){
                protected PasswordAuthentication getPasswordAuthentication() {
                    return new PasswordAuthentication(name, pw);
                }
            };
            mailSession = Session.getInstance(mailProperties, auth);
        } else {
            mailSession = Session.getInstance(mailProperties);
        }

        // Create a default MimeMessage object.
        MimeMessage message = new MimeMessage(mailSession);

        // Set "from" address
        message.setFrom(new InternetAddress(from));

        // Set "to" address
        message.addRecipient(Message.RecipientType.TO, new InternetAddress(to));

        // Subject
        message.setSubject("Test Message");

        // Body
        message.setText("Hello World");

        // Send message
        Transport.send(message);
    }

Plain Text

For plain text access to the SMTP service, the following properties can be used:
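
A minimal set might look like this (the host name is a placeholder to adapt to your outgoing mail server):

```properties
mail.transport.protocol=smtp
mail.smtp.host=smtp.mycompany.com
mail.smtp.port=25
```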



For sending a message using the STARTTLS built-in security, the following properties should be used:
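
A sketch of such a configuration (the host, user, and password are placeholders; note that `mail.smtp.password` is read by the code excerpt above, not by JavaMail itself):

```properties
mail.transport.protocol=smtp
mail.smtp.host=smtp.mycompany.com
mail.smtp.port=587
mail.smtp.auth=true
mail.smtp.starttls.enable=true
mail.smtp.user=alice
mail.smtp.password=secret
```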



For sending a message using SMTP over SSL, the following properties should be used:
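
A sketch of such a configuration using the "smtps" protocol (the host, user, and password are placeholders; `mail.smtps.password` is read by the code excerpt above, not by JavaMail itself):

```properties
mail.transport.protocol=smtps
mail.smtps.host=smtp.mycompany.com
mail.smtps.port=465
mail.smtps.auth=true
mail.smtps.user=alice
mail.smtps.password=secret
```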



There is a property that sets the “debug” mode for JavaMail. It can be set as follows:
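
The relevant property is `mail.debug`:

```properties
mail.debug=true
```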


This note explains how to set up Eclipse for developing images for the EZ430-F2013. This note assumes that the development environment is set up in Ubuntu 12.10.

This note relates to:

  • Ubuntu 12.10
  • EZ430-F2013
  • msp430-gcc version 4.6.3
  • Eclipse Classic version 4.2.1


Install Command-Line Tools

The installation of command-line tools to support the EZ430-F2013 development kit is covered in a different note. Please refer to the note Use EZ430-F2013 in Ubuntu to complete this step.

Install CDT

In Eclipse, the packages that support C/C++ development are named the C Development Toolkit, or CDT. Portions of the CDT are required to enable a development environment for the EZ430-F2013:

  • C/C++ Development Tools
  • C/C++ Development Tools SDK
  • C/C++ Debugger Services Framework (DSF) Examples
  • C/C++ GDB Hardware Debugging

To install the CDT, in Eclipse, select “Help” > “Install New Software…”, choose the update site for the current release, and select the components listed above (they appear under “Programming Languages”).

Create a C project to use MSP-GCC

Create a C project:

  • “File” > “New” > “Project…” > “C/C++ / C Project”
  • Enter a project name
  • Use default location
  • Project Type: Executable > Empty Project > Linux GCC
  • Press Finish

Set up new project to use MSP-GCC:

  • Right-click on project, select “Properties”
  • Select page “C/C++ Build” > “Settings”
  • Select configuration “[ All configurations ]”
  • Select settings “Tool Settings” > “GCC C Compiler”
    • Command: msp430-gcc -mmcu=msp430f2013
  • Select settings “Tool Settings” > “GCC C Compiler” > “Includes”
    • Add include path: /usr/msp430/include
  • Select settings “Tool Settings” > “GCC C Compiler” > “Optimization”
    • Optimization Level: “Optimize for size (-Os)”
  • Select settings “Tool Settings” > “GCC C Linker”
    • Command: msp430-gcc -mmcu=msp430f2013 -Wl,-Map=${BuildArtifactFileBaseName}.map
  • Select settings “Tool Settings” > “GCC C Linker” > “Libraries”
    • Add library search path: /usr/msp430/lib
  • Select settings “Tool Settings” > “GCC Assembler”
    • Command: msp430-as
  • Select settings “Tool Settings” > “GCC Assembler” > “General”
    • Add include path: /usr/msp430/include
  • Select settings “Build Artifact”
    • Artifact type: “Executable”
    • Artifact extension: “elf”
  • Select settings “Binary Parsers”
    • select “Elf Parser”
  • Press “Apply” and “OK”

Once the project is set up correctly, it should be possible to build an image from source by right-clicking on the project and selecting “Build Project”. In the console, the output of the build should look something like the following:

00:00:00 **** Build of configuration Debug for project XXX ****
make all
Building file: ../main.c
Invoking: GCC C Compiler
msp430-gcc -mmcu=msp430f2013 -I/usr/msp430/include -Os -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o "main.o" "../main.c"
Finished building: ../main.c
Building target: xxx.elf
Invoking: GCC C Linker
msp430-gcc -mmcu=msp430f2013 -Wl,-Map=xxx.map -L/usr/msp430/lib -o "xxx.elf"  ./main.o  
Finished building target: xxx.elf

00:00:00 Build Finished (took 166ms)

Set Up Debugger Configuration

To debug the project directly from the IDE, a debugger must be configured. The following steps are used to configure a debugger to use the command-line tool “msp430-gdb”:

  • Select “Run” > “Debug Configuration…”
  • Select “GDB Hardware Debugging”
  • Press the “New” icon
  • Set name
  • Tab “Main”:
    • Project: select project
  • Tab “Debugger”:
    • GDB Command: msp430-gdb
    • Set “Use remote target”
    • JTAG Device: Generic TCP/IP
    • Host name or IP address: localhost
    • Port number: 2000
  • Tab “Startup”:
    • Reset and Delay: 3
    • Set: “Halt”
    • Initialization Commands: “monitor erase”
    • Set: “Load image”
    • Set: “Use project binary”
    • Set: “Load symbols”
    • Set: “Use project binary”
  • Tab “Common”:
    • Save as: “Shared file” (select project)
    • Display in favorite menu: “Debug”
  • Press “Apply”

Running the debugger from the IDE is equivalent to running “msp430-gdb” at the command line. It first requires that “mspdebug” is running to bridge to the EZ430-F2013 development kit. This can be performed from a terminal or from an Eclipse run configuration (next section).

To run the debugger, select from the menu “Run” > “Debug Configurations…”, choose the debugger created above and press the button “Debug”.

Run MSPDEBUG from Eclipse

It is possible to run “mspdebug” from Eclipse using a run configuration. Here are the steps to set it up:

  • Select “Run” > “External Tools” > “External Tools Configurations…”
  • Right-click on “Program” and select “New”
  • Enter name: MSPDebug
  • Select “Main” tab:
    • Location: /usr/bin/mspdebug
    • Arguments: uif -d /dev/ttyUSB0 gdb

Once set up as an external tool, to run “mspdebug”, select “Run” > “External Tools” > “External Tools Configurations…”, click on the program called “MSPDebug” and press the “Run” button.


There are many short-cuts in Eclipse to run external tools and debug configurations without using the tedious menus. Some of these short-cuts are located on the main tool bar, and they make it very convenient to start “mspdebug” and “msp430-gdb”. The ability to develop and debug using the EZ430-F2013 directly from Eclipse greatly reduces the development cycle of an image targeted for the MSP430.


Use EZ430-F2013 in Ubuntu

This note explains how to install the command-line tools in Ubuntu to compile, load and run images destined for the Texas Instruments EZ430-F2013.

The TI EZ430-F2013 is a USB development environment for small dongles that host a TI MSP430 micro-controller (MSP430-F2012 and MSP430-F2013).

This note relates to:

  • Ubuntu
  • EZ430-F2013

Install Packages

The packages required to work with the MSP430 are available from the main repositories.

sudo apt-get install binutils-msp430 gcc-msp430 gdb-msp430 msp430-libc msp430mcu mspdebug

Verify Drivers for EZ430-F2013

Insert the EZ430-F2013 in a USB port. Then, run “dmesg” to verify that it was detected correctly:

dmesg

The output should end with a set of lines that looks as follows:

[47661.708176] usb 5-1: new full-speed USB device number 8 using uhci_hcd
[47661.905243] usb 5-1: New USB device found, idVendor=0451, idProduct=f430
[47661.905253] usb 5-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[47661.905260] usb 5-1: Product: MSP-FET430UIF JTAG Tool
[47661.905266] usb 5-1: Manufacturer: Texas Instruments
[47661.905272] usb 5-1: SerialNumber: TUSB3410572A43E3DB45FFB1
[47661.909238] ti_usb_3410_5052 5-1:1.0: TI USB 3410 1 port adapter converter detected
[47662.492172] usb 5-1: reset full-speed USB device number 8 using uhci_hcd
[47662.636193] usb 5-1: device firmware changed
[47662.636240] ti_usb_3410_5052: probe of 5-1:1.0 failed with error -5
[47662.636365] usb 5-1: USB disconnect, device number 8
[47662.804100] usb 5-1: new full-speed USB device number 9 using uhci_hcd
[47663.029224] usb 5-1: New USB device found, idVendor=0451, idProduct=f430
[47663.029233] usb 5-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[47663.029240] usb 5-1: Product: MSP-FET430UIF JTAG Tool
[47663.029246] usb 5-1: Manufacturer: Texas Instruments
[47663.029252] usb 5-1: SerialNumber: TUSB3410572A43E3DB45FFB1
[47663.032921] ti_usb_3410_5052 5-1:1.0: TI USB 3410 1 port adapter converter detected
[47663.032950] ti_usb_3410_5052: probe of 5-1:1.0 failed with error -5
[47663.037227] ti_usb_3410_5052 5-1:2.0: TI USB 3410 1 port adapter converter detected
[47663.037509] usb 5-1: TI USB 3410 1 port adapter converter now attached to ttyUSB0

The report from “dmesg” shows that the development kit was mounted to /dev/ttyUSB0. This value is assumed for the remainder of the note; however, you might need to adjust it to fit your particular installation. The device node can be confirmed with:

ls /dev/ttyUSB*

Writing a program for MSP430-F2013

Create a new source file.

gedit sos.c

Paste the following code:

#include <msp430f2013.h>

#define DIM(x) (sizeof(x)/sizeof(x[0]))
#define DOT  {10000, 1}
#define DASH {30000, 1}
#define SP   { 8000, 0}
#define LTR  {16000, 0}
#define WRD  {80000, 0}


typedef struct _Segment {
    unsigned long delay;
    unsigned char state;
} Segment;

// "SOS" in Morse code: S = dot dot dot, O = dash dash dash
Segment segments[] = {
    DOT, SP, DOT, SP, DOT, LTR,      // S
    DASH, SP, DASH, SP, DASH, LTR,   // O
    DOT, SP, DOT, SP, DOT, WRD       // S
};

int main(void)
{
    WDTCTL = WDTPW + WDTHOLD;                 // Stop watchdog timer
    P1DIR |= 0x01;                            // Set P1.0 to output direction

    int sequence = 0;

    for (;;)
    {
        volatile unsigned long i;

        Segment segment = segments[sequence];

        if( segment.state ) {
            P1OUT |= 0x01; // Turn on LED
        } else {
            P1OUT &= 0xfe; // Turn off LED
        }

        i = segment.delay; // Delay
        do (i--);
        while (i != 0);

        // Go to next segment
        sequence++;
        if( sequence >= DIM(segments) ){
            sequence = 0;
        }
    }

    return 0;
}
The code above drives the LED found on the MSP430-F2013 to emit “SOS” in Morse code. A number of definitions are already declared in the file “msp430f2013.h”. Many header files relating to the MSP430 can be found in the directory /usr/msp430/include.

Compiling and Linking Program

Compiling and linking programs/images for the MSP430 is done by using “msp430-gcc”, which is a version of “gcc” that targets the MSP430 micro-controller. Most options available for “gcc” are also available for “msp430-gcc”.

Compiling using “msp430-gcc” yields object files, as with “gcc”. However, invoking the linker produces *.elf files, which can be loaded on the MSP430-F2013 via the development kit (EZ430-F2013).

Following the example started above, here are the two commands required to produce an object file from the source code, and then link it into an image:

msp430-gcc -c -mmcu=msp430f2013 -g -Os -Wall -Wunused -IInclude -o sos.o sos.c
msp430-gcc sos.o -mmcu=msp430f2013 -Wl,-Map=sos.map  -o sos.elf


Using MSPDEBUG

A command-line utility called “mspdebug” is used to perform most operations that involve the EZ430-F2013 development kit.

First, insert the EZ430-F2013 in a USB port and verify that it is mounted to /dev/ttyUSB0 (see steps above).

When invoking “mspdebug”, one must specify the device that the EZ430-F2013 is mounted as.

mspdebug uif -d /dev/ttyUSB0

This command opens a shell where a number of commands can be sent to the development kit. To leave the shell, press CTRL-D.

Loading Image on MSP430-F2013

Loading an image on an instance of MSP430-F2013 via the development kit is accomplished by using “mspdebug”:

mspdebug uif -d <device> "prog <elf-file>"


where:

  • <device> refers to the device file name where the USB development kit is mounted
  • <elf-file> refers to the ELF file produced by the msp430-gcc linker

Following the example above:

mspdebug uif -d /dev/ttyUSB0 "prog sos.elf"

Debugging MSP430

Via the EZ430-F2013, one can debug the program loaded in the MSP430, including single-step execution, via the “gdb” debugger provided by the MSP430 packages. The command-line utility “msp430-gdb” is similar to “gdb” but designed for the MSP430 micro-controller. However, “mspdebug” is needed to bridge “msp430-gdb” to the MSP430 via the EZ430-F2013 development kit.

The command “gdb” within the shell provided by “mspdebug” starts a server that listens for commands from “msp430-gdb” and forwards them to the EZ430-F2013 development kit. When “msp430-gdb” connects to “mspdebug”, it provides a shell similar to the one provided by “gdb”.

The command required to start the server is:

mspdebug uif -d <device> gdb

In a different terminal, attach the debugger with the following command:

msp430-gdb <elf-file>

The commands available in “gdb” are also available in “msp430-gdb”:

  • To execute program: continue
  • To stop program: CTRL-C
  • To list stack variables: info locals
  • To list global variables: info variables

To continue the example started above:

mspdebug uif -d /dev/ttyUSB0 gdb &
msp430-gdb sos.elf

Debug using X Debugger: ddd

If X is available, one can use “ddd”, which is a GUI application to simplify the use of gdb.

sudo apt-get install ddd

To use “msp430-gdb” via “ddd”, one must first start “mspdebug” and then invoke “ddd” by specifying “msp430-gdb”. To continue the example in this note:

mspdebug uif -d /dev/ttyUSB0 gdb &
ddd --debugger msp430-gdb sos.elf


Using the TI EZ430-F2013 development kit with an Ubuntu platform is a snap since all the tools required are readily available in the main repositories. Furthermore, anyone familiar with gcc and gdb can easily transition to the line of tools designed for the MSP430 since the commands and options are almost the same.


Fix magic for MP4 MSNV in Ubuntu 12.04

In Ubuntu, as in other flavours of Linux, utilities like “file” detect the type of files using an algorithm based on “magic numbers”, which looks for fixed fields in the examined file. Lately, some of these utilities have failed to detect the proper type of some MP-4 files, which do not contain the “magic numbers” expected of an MP-4 file.

This note presents a method to update the magic files to detect new file formats. More specifically, the method concentrates on updating the magic files to recognize the new MP-4 format.

This note relates to:

  • Ubuntu 10.04
  • Ubuntu 12.04
  • MP-4 format, major brand MSNV


This note uses a number of examples to illustrate the fix. In these examples, two files are used: an MP-4 file recognized by the file utility, named “known.mp4”; and an MP-4 file not recognized by the utility, named “unrecognized.mp4”.

Exploring the problem

For the recognized file, the following sequence is experienced:

file -bnr --mime known.mp4
video/mp4; charset=binary

As seen above, the format of the file is recognized correctly and the mime-type is returned appropriately. Here is the sequence for the file that is not recognized correctly.

file -bnr --mime unrecognized.mp4
application/octet-stream; charset=binary

In the case of the file which is not recognized, a generic mime-type is returned.

Differences between files

Programs from the FFMPEG or libav projects can be used to find out details of video files. Using avprobe (or ffprobe), it is possible to get details from each file and examine the differences.

avprobe known.mp4
avprobe version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2007-2012 the Libav developers
  built on Jun 12 2012 16:37:58 with gcc 4.6.3
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'gwichin/media/conv391580280152567226.mp4':
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 1970-01-01 00:00:00
    date            : 2012-03-06T15:55:04-0500
    encoder         : Lavf53.3.0
  Duration: 00:00:10.14, start: 0.000000, bitrate: 184 kb/s
    Stream #0.0(und): Video: h264 (High), yuv420p, 320x240, 132 kb/s, 29.97 fps, 29.97 tbr, 30k tbn, 59.94 tbc
      creation_time   : 1970-01-01 00:00:00
    Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 46 kb/s
      creation_time   : 1970-01-01 00:00:00
avprobe unrecognized.mp4
avprobe version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2007-2012 the Libav developers
  built on Jun 12 2012 16:37:58 with gcc 4.6.3
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'upl5976181052131867323.mp4':
    major_brand     : MSNV
    minor_version   : 19464262
    compatible_brands: MSNVmp42isom
    creation_time   : 2012-01-06 19:20:14
  Duration: 00:00:36.16, start: 0.000000, bitrate: 9614 kb/s
    Stream #0.0(jpn): Video: h264 (Main), yuv420p, 1280x720, 9490 kb/s, 30 fps, 30 tbr, 30k tbn, 60k tbc
      creation_time   : 2012-01-06 19:20:14
    Stream #0.1(jpn): Audio: aac, 44100 Hz, mono, s16, 128 kb/s
      creation_time   : 2012-01-06 19:20:14

By looking at these reports, it becomes evident that although both files are reported as MP-4, they are not reporting the same major brand.

Detailed Differences

The “od” utility can be used to look at the byte information from each file. This information is necessary to craft the magic entries needed to recognize the new format.

od -t x1 -t a known.mp4 | less
0000000  00  00  00  20  66  74  79  70  69  73  6f  6d  00  00  02  00
        nul nul nul  sp   f   t   y   p   i   s   o   m nul nul stx nul
0000020  69  73  6f  6d  69  73  6f  32  61  76  63  31  6d  70  34  31
          i   s   o   m   i   s   o   2   a   v   c   1   m   p   4   1

A different file header is observed from the file that is not recognized:

od -t x1 -t a unrecognized.mp4 | less
0000000  00  00  00  1c  66  74  79  70  4d  53  4e  56  01  29  00  46
        nul nul nul  fs   f   t   y   p   M   S   N   V soh   ) nul   F
0000020  4d  53  4e  56  6d  70  34  32  69  73  6f  6d  00  00  00  94
          M   S   N   V   m   p   4   2   i   s   o   m nul nul nul dc4

In the case of the recognized file, the bytes at offsets 4 to 11, “ftypisom”, identify the file as MP-4. The same bytes in the unrecognized file read “ftypMSNV”. The crafting of the magic numbers to recognize this new file type must be based on this latter string.
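
This layout can be sanity-checked outside of the magic file. The helper below is hypothetical, for illustration only, and not part of the “file” utility; the header bytes are those dumped by “od” above.

```python
# Mirror of the magic rule: offset 4 holds the string "ftyp" and
# offset 8 holds the major brand of the MP-4 file.
def looks_like_msnv_mp4(data: bytes) -> bool:
    return data[4:8] == b"ftyp" and data[8:12] == b"MSNV"

# First 16 bytes of unrecognized.mp4, as dumped by "od" above
header = bytes.fromhex("0000001c667479704d534e5601290046")
print(looks_like_msnv_mp4(header))  # True
```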

Updating Magic Numbers

This note does not explain the format of the magic number files. This topic is covered in other pages. See the references above.

In Ubuntu, updating the file /etc/magic with the proper entries is all that is required for the file utility to detect the new types. There is no need to compile the file, as in other versions of Linux. Nor is there a need to perform any reboot.

To edit the magic number file:

sudo gedit /etc/magic

The following lines should be added to the magic file:

4       string  ftyp    ISO Media
>8      string  MSNV    \b, MPEG v4 system, version 2
!:mime  video/mp4

After the magic number file is updated, save and close.


To test that your changes are working, perform a “file” command on the previously not recognized video file:

> file unrecognized.mp4
upl5976181052131867323.mp4: ISO Media, MPEG v4 system, version 2
> file -bnr --mime unrecognized.mp4
video/mp4; charset=binary

If the results above are not observed, then something has gone wrong.


The same problem was encountered on Ubuntu 10.04 and the fix introduced above corrected the problem. I suspect that all intervening versions of Ubuntu between 10.04 and 12.04 suffer the same problem and that the same fix applies.


After upgrading to Ubuntu 12.04 (Precise Pangolin) on a Dell Inspiron 1720, the speakers continued playing even after plugging in a set of headphones. This problem has been reported and fixed in multiple forums. However, the fix requires a model number, which cannot be easily guessed. Therefore, this post is meant to help those who have a machine similar to the Dell Inspiron 1720.


This note applies to:

  • Ubuntu 12.04
  • Dell Inspiron 1720

The fix is applied by using the following steps:

  • Edit /etc/modprobe.d/alsa-base.conf and modify the appropriate line
  • Reboot computer

Edit alsa-base.conf

sudo gedit /etc/modprobe.d/alsa-base.conf

Initially, the file contained the following line:

options snd-intel8x0m index=-2

This line must be commented out (in case you want it back) and replaced as follows:

# options snd-intel8x0m index=-2
options snd-intel8x0m index=-2 ac97_quirk=1 buggy_irq=1 enable=1

After saving the file, rebooting the computer is all that is required.


This note explains how to add new public IP addresses, in excess of the first static IP address, to the WAN interface of a DD-WRT router. All public addresses are assigned to computers that reside within the LAN network served by the router. Therefore, network address translation (NAT) is performed between the public addresses and the addresses assigned internally. This note was written after the procedure was successfully performed on:

  • ASUS RT-N16 Wireless Router
  • DD-WRT v24-sp2 (08/07/10) mega (SVN revision 14896)
  • ISP: TekSavvy


When enquiring about receiving a new static IP address from my ISP, I found out that a subnet could be leased for a monthly fee. I selected a subnet containing two usable IP addresses, which was assigned to my internet service. The new addresses were communicated using slash notation: XXX.XXX.XXX.XXX/30.

Although a /30 subnet might suggest that four distinct addresses are available, there are only two. In fact, each subnet contains two addresses that have special meaning: the first, which is the subnet identifier; and the last, which is the subnet broadcast address. Therefore, the first address available for assignment in a subnet is the one after the subnet identifier. For example, if the assigned subnet were, the subnet identifier would be, the broadcast address would be, and the available addresses would be and

For simplicity in the following examples, we will assume that the assigned subnet is Also, it will be assumed that the public address is assigned to a computer that is manually set with the LAN address


The solution is two-fold. First, assign one of the new public IP addresses to the router’s WAN interface. Then, use firewall rules to route packets to and from the desired computer within the LAN.

Although the solution calls for entering commands through the router’s web interface, I suggest you test those commands by entering them at the command line using a telnet or SSH session to your router. Once this works well, transcribing the commands to the web interface is a good way to save those changes in case the router is rebooted.

Assign Public Address

Using a web browser, open the web interface to your router. This is usually done by directing your browser to an address similar to:

Navigate to the “Administration” tab, followed by the “Commands” sub-tab.

In the text box titled “Commands” under “Command Shell”, enter the commands to assign the public address to the WAN interface. Use the example below as a template and substitute the addresses according to your situation:

/sbin/ifconfig $WANIF:1 netmask broadcast

Once this is entered in the text box, save the changes by pressing the button titled “Save Startup”.

Firewall Rules

To assign the firewall rules, the text box mentioned in the previous step is used. However, when saving the content of the firewall rules, the button titled “Save Firewall” is used instead.

In the firewall rules, one command is used to map the public address to the internal address; one command is used to map the internal address to the public address; and one command is used to accept each port that should be forwarded.

Use the following template and substitute the appropriate addresses and ports:

/usr/sbin/iptables -t nat -I PREROUTING -d -j DNAT --to
/usr/sbin/iptables -t nat -I POSTROUTING -s -j SNAT --to
/usr/sbin/iptables -I FORWARD -d -p tcp --dport 80 -j ACCEPT
/usr/sbin/iptables -I FORWARD -d -p tcp --dport 22 -j ACCEPT

The above example forwards HTTP (port 80) and SSH (port 22) requests to the internal computer.

Reboot Router

For the changes to take effect, the router must be rebooted. Using the router’s web interface, navigate to “Administration” tab and the “Management” sub-tab. Finally, press the button titled “Reboot Router” at the bottom of the page.


Tethering iPhone 5.1 on Ubuntu 11.10

This blog site already features a note on tethering the iPhone to Ubuntu (http://www.bitsbythepound.com/tethering-iphone-on-ubuntu-11-04-397.html). However, since upgrading to iOS 5.1, when the iPhone is connected to the Ubuntu platform, an error is reported stating “Unhandled Error Lockdown”.

This note applies to:

  • Ubuntu 11.10 (Oneiric)
  • iPhone 5.1


The solution offered by this note is to:

  • Install the package repository offered by Paul McEnery
  • Adjust the package repository to be picked up in oneiric
  • Install the necessary packages from the repository
  • Configure

Install Package Repository from Paul McEnery

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:pmcenery/ppa

Adjust Package Repository for use in Oneiric

I applaud the work done by Paul McEnery in providing us the tools to use the iPhone. Unfortunately, the package repository is not available for Ubuntu 11.10. On the other hand, the package repository for Natty (11.04) seems to work just fine for Oneiric. In this step, the package list is modified to point to the Natty version.

sudo gedit /etc/apt/sources.list.d/pmcenery-ppa-oneiric.list

While editing the file, replace “oneiric” with “natty” and save. The resulting file should look something like:

deb http://ppa.launchpad.net/pmcenery/ppa/ubuntu natty main
deb-src http://ppa.launchpad.net/pmcenery/ppa/ubuntu natty main

Install Necessary Packages

sudo apt-get update
sudo apt-get install  ifuse libimobiledevice2 libimobiledevice-utils


Configure

The first step of configuration is to connect the iPhone to the platform using the USB cable. Then, run the following command:

idevicepair unpair && idevicepair pair

Finally, unplug and re-connect the iPhone. The error should not be reported. Instead, all services from the iPhone should be available.


Since upgrading to Ubuntu 11.10, some games that used to run perfectly using PlayOnLinux started to display unwanted behaviour. In particular, the mouse jumps from the expected position in Starcraft: Broodwar. This behaviour prevents efficient playing. Researching the Internet shows that many other games are plagued by the same problem.

This note applies to:

  • Ubuntu 11.10
  • PlayOnLinux 4.0.16
  • Wine 1.3.18

Many solutions offered on the Internet did not work on my system. However, the problem appears to be fixed in a newer version of Wine. The approach offered in this note is to install Wine 1.4. The latest version of Wine is not yet available via the packages offered by Ubuntu; however, the latest (beta) version is available via WineHQ’s PPA. The following steps add a new package repository and update Wine to the latest version available from this repository. As an interesting note, Wine 1.4 is available via the package named wine-1.3.

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:ubuntu-wine/ppa
sudo apt-get update
sudo apt-get dist-upgrade

After those changes, you can verify the currently installed version of Wine with the following command.

wine --version


Wine and PlayOnLinux are great tools. The complexity of the task performed by these tools is such that one must expect some unstable versions. This note demonstrated how a user can jump to experimental versions of Wine to access fixes earlier than they are pushed through the standard packages.


I have been using a Solid State Drive (SSD) for some time and have decided to share my notes on how to configure one, along with the references I followed. My configuration has two disks attached to the platform: the first is the SSD and holds the operating system; the second is a magnetic drive that holds my home partition.

This note applies to:

  • Ubuntu 11.04
  • ext4

From my research, the important aspects that need to be dealt with when using an SSD are:

  1. Enable TRIM
  2. Disable access time stamps on files
  3. Adjust the disk scheduler
  4. Move log files to a RAM drive

Each of these topics is discussed separately below.

Enable TRIM

TRIM is the process by which an operating system tells a drive that a sector is no longer in use. This is not necessary for magnetic drives and, historically, operating systems did not provide this information to drives, as it increased the amount of traffic between a host and its drives. However, this information is crucial for SSDs, which need it to perform proper wear leveling.

Information on TRIM can be found via these links:

In Ubuntu, TRIM is enabled via the file “fstab” located in “/etc”. One needs to find the drive associated with the SSD and add the option “discard”. To edit the “fstab”:

sudo gedit /etc/fstab

Then add the “discard” option to the drive concerned. In my example, the root drive is the SSD. Therefore, the resulting file looks like:

# /etc/fstab: static file system information.
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0

# / was on /dev/sdb1 during installation
UUID=c72f9086-c306-42c6-956b-77bd546eff25 /               ext4    discard,noatime,nodiratime,errors=remount-ro 0       1

After the “fstab” file is modified, one must reboot the system for the change to take effect.
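A quick first check after the reboot is to confirm that the option is active on the mounted filesystem. A minimal sketch, assuming the root filesystem is the SSD:

```shell
# Show the mount entry for the root filesystem as the kernel sees it.
# After the reboot, "discard" should be listed among the mount options.
grep ' / ' /proc/mounts
```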

After the reboot, one should ascertain that TRIM was in fact enabled. To do so, a trick provided by Nicolay Doytchev is used (link above). Note that this trick is only claimed to work with ext4; one might not get the expected results with other filesystems.
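Since the trick is only claimed to work with ext4, it is worth confirming the filesystem type first. A sketch for the root filesystem:

```shell
# Print the filesystem type of the root mount; the test below is
# reported to work on ext4.
awk '$2 == "/" {print $3}' /proc/mounts
```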

1. Become root and move to a directory managed by the SSD:

sudo bash
cd /etc

2. Create a file with random bytes in it:

dd if=/dev/urandom of=tempfile count=20 bs=512k oflag=direct

3. Get and record the disk sector address for the beginning of the file:

hdparm --fibmap tempfile

Note the number under begin_LBA and use it as <ADDRESS> below.

4. Read the file data directly from the disk (replace /dev/sdX with the drive you enabled in fstab):

hdparm --read-sector <ADDRESS> /dev/sdX

The result of this should be random bytes and look like this:

root:/etc# hdparm --read-sector 101193728 /dev/sdb

reading sector 101193728: succeeded
036b f924 492b f3e3 2f36 d68b 2bf5 6eba
a747 2855 136d b22b ffb3 7496 b412 1342
1bd1 0a1a 2427 176f 4a6c f81f a8a4 9d8b
e869 1681 9a25 10f4 ecb4 fe4f 02dd 5290
23bf aec2 8b3c ae70 4f96 9cf6 dc19 ccd7
f112 8984 e01f e8fb c842 fbe9 f57b 4a95
13b7 8a7b 995a 719d 5449 8340 bbe6 5aab
0a0d 3533 fa57 2493 d746 a312 0440 eef5
04ae ff48 b57e 98c9 1a7e 5479 4f51 eaff
7ef4 5025 8242 32c4 beec 6fcd ce98 7522
16e3 d8e2 04da 0d66 bc06 fce6 c434 c376
403d cce0 afdd 4643 4781 8e71 a607 f5dc
ad7c 2fb1 7f00 0aab 19f2 d99e d456 d80d
1fab 2f80 da20 6f8d aced 2ac7 97a2 437e
1240 a07b a80b 12b1 9d35 5028 bcf8 6584
386a 20e0 1955 7ec7 3ce8 7de3 07c1 04d2
fec3 e61d c842 3e6b da74 789e a5cb f7a1
e6bf aff0 9578 bfed 65a3 592a 3d82 0c80
cbcb cc62 6f4b fff5 9d92 f06c 0268 3f78
8b88 edac 9af8 cfba 919c 72c7 8bc6 b25f
195e a4f5 8ba9 ef44 973d 7775 6b19 7566
8885 8003 5338 6ff0 3642 e4c5 a04d a305
f227 475f ddfe ac52 be67 94cd ffea 83f0
f055 4862 7b6b 7219 7df0 a990 98ec c3fd
fc84 de89 47c0 7b83 07c9 ef4d 20b5 b72d
1955 2860 c1a7 2c30 83d9 1dbe 4420 0866
b1af efa8 9a5a dd72 554e 4d8a 80d3 0288
d3b3 d7b1 75ea 9e62 7476 2581 6ec4 3c1c
de08 b66a 6d1e af0d f6e9 c89b 9fb3 c072
94b6 a3b7 d586 a653 b61a c3fc 677f 4337
bfad 0bdf 8602 9ac6 5a47 e559 707e 9914
3b0f 96f8 d4a6 dc20 8d6a 32d0 516a bc5d

5. Delete the file and synchronize the file system so that the changes are pushed to the SSD:

rm tempfile
sync

6. Finally, repeat the command from step 4. Reading the same disk sector should yield an empty sector. For example:

root@Mobile-1720:/etc# hdparm --read-sector 101193728 /dev/sdb

reading sector 101193728: succeeded
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000

Disable access time stamps on files and directories

Every time a file is accessed, a time stamp for the last access is recorded with the file. It is believed that this information is generally unnecessary and that it is a source of a lot of wear on an SSD. Turning off access time stamps does not disable last-modified time stamps, which are crucial to a lot of computer tools.

However, it is possible that some of the tools you use rely on last access time stamps. Therefore, I suggest that you disable access time stamps separately from all other changes you make to your system. Turn them off, then leave your system otherwise unchanged for a while to see how it performs in this mode. During that period, try out all your tools. Tools that rely on file caching might not fare as well as others.

Interesting links about last access time:

Disabling last access time is done by adding the options “noatime” and “nodiratime” for the SSD to the file “fstab” under the directory “/etc”. First, edit the “fstab” file:

sudo gedit /etc/fstab

Then, locate your drive and add the options “noatime” and “nodiratime” at the appropriate line. For example, my file looks like:

# /etc/fstab: static file system information.
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    nodev,noexec,nosuid 0       0

# / was on /dev/sdb1 during installation
UUID=c72f9086-c306-42c6-956b-77bd546eff25 /               ext4    discard,noatime,nodiratime,errors=remount-ro 0       1

After the file is saved, one must reboot the operating system for the changes to take effect.

After the reboot, you can verify that access times are no longer being recorded with the following tests:

1. Move to a directory managed by the SSD and writable by your user. If your home directory is on the SSD, then the following would work:

cd ~
2. Create a file for testing purposes:

echo hello > hello.txt

3. Record the last access time:

ls -lu hello.txt

4. Wait for a minute or two to elapse (you can use ‘date’) and then read the file:

less hello.txt

5. Repeat the command from step 3 and compare the results. If the same time is returned, then “noatime” is in effect. If a newer time is returned, then the operating system is still recording access times for the file.

6. Clean up after yourself and delete the test file:

rm hello.txt
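The steps above can be sketched as a single script. A minimal sketch; for a meaningful result, the scratch directory should live on the SSD rather than on a tmpfs like /tmp:

```shell
#!/bin/sh
# Compact version of the manual test above: create a file, read it after
# a pause, and compare its access times before and after the read.
dir=$(mktemp -d)                  # scratch directory; place it on the SSD
cd "$dir"
echo hello > hello.txt
before=$(stat -c %X hello.txt)    # last-access time, in seconds
sleep 2
cat hello.txt > /dev/null         # read the file
after=$(stat -c %X hello.txt)
if [ "$before" = "$after" ]; then
    result="noatime in effect"
else
    result="access time still recorded"
fi
echo "$result"
rm hello.txt
```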

Adjust Disk Scheduler

By default, the CFQ scheduler is used to access a drive. This scheduler is designed for magnetic drives and takes into account variables such as seek times. On an SSD, access time is fairly constant, so there is no need for a complex scheduler: serving requests on a first-come-first-served basis is adequate. I have used the “noop” scheduler, and this is what I demonstrate. However, there is an interesting post at this blog that might convince you otherwise.

The trick offered here changes the scheduler after boot, as the operating system is starting its services. Therefore, the boot process itself does not benefit from the change. The post above provides links on how to set the scheduler from the boot loader instead.

Myself, I like keeping my boot loader as clean as possible, so I have adopted this trick.

1. Edit the file “/etc/rc.local”:

sudo gedit /etc/rc.local

2. Add the following line, taking care to replace sdX with the proper drive:

echo noop > /sys/block/sdX/queue/scheduler

This change will take effect at the next reboot. However, to save a reboot, one can apply the change right away. Note that a plain “sudo echo noop > …” does not work, because the redirection is performed by your unprivileged shell; use tee instead (again, replacing sdX with the proper drive):

echo noop | sudo tee /sys/block/sdX/queue/scheduler
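To verify which scheduler is in use, one can read the same sysfs file back; the active scheduler is shown in square brackets. A sketch that lists all visible block devices:

```shell
# Print the available schedulers for every visible block device;
# the active one appears in square brackets, e.g. "sda: [noop] deadline cfq".
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue       # skip if the glob matched nothing
    dev=${f#/sys/block/}          # e.g. sda/queue/scheduler
    dev=${dev%%/*}                # e.g. sda
    printf '%s: ' "$dev"
    cat "$f"
done
```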

Moving log files to RAM

Log files are a source of constant writing to disk. Personally, I have not yet made this change on my drive, for two reasons:

  1. I am ambivalent about throwing away the logs, since I sometimes rely on them to find the causes of crashes. Given that I keep up with the latest version of Ubuntu and that I have a set of problematic video drivers, I feel I must keep my logs around.
  2. Of all the solutions offered, I have not been able to make one work to my liking. I find that many of the solutions presented on the web are finicky and error-prone.

However, my research indicates that this will lead to an earlier retirement of my drive. Therefore, I offer here a link that I found useful on this topic:
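For reference, the usual approach is to mount /var/log as a tmpfs (RAM-backed) filesystem from “fstab”. This is a sketch only, which I have not adopted myself; note that all logs are lost at every reboot, and the size value is an assumption to adjust to your needs:

```
# /etc/fstab: keep /var/log in RAM (contents are lost at each reboot)
tmpfs   /var/log   tmpfs   defaults,noatime,mode=0755,size=128m   0   0
```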

Other readings

Here are a couple of interesting links that might guide you in your decisions. I do not personally endorse the views presented there, but I believe they provide good food for thought:


With an SSD, the time to boot up has greatly decreased and the performance of my development tools has increased. Surprisingly, the overall temperature reported by my laptop’s sensors has also gone down. However, as most of my computer activities are “web based”, the network-bound activities have remained pretty much the same.

All in all, I have been quite satisfied with my purchase of an SSD.