I made a change in the Blogger configuration to make later blogging easier. It is possible that older entries are not correctly formatted.

Monday 29 September 2008

Some Ideas for Desktop Improvements

I have a few ideas for improving the desktop presentation to better support my needs. Basically, a few things that I find important:

To Do List

I want a to-do list which is more or less always visible while the desktop is up.
  • classification by priorities
    • important and urgent
    • not important but urgent
    • important and not urgent
    • not important and not urgent
    • not classified
  • classification by subject
    • work
    • administrative
    • hobby
    • family
  • Style of the task and Icons
    • Position of Tasks as Desktop Icons
    • Size of Desktop Icons
I saw a few months ago a presentation by Mozilla Labs (if I recall correctly), as well as others, on how to improve the display of the desktop. I find this really good, and it might not be that complex to implement. But before starting the implementation, I need to know where the different parts are stored, as well as the process used for making the changes. I am not completely sure whether the information stays in memory or is stored in a file. For instance, the position of the different icons is found in the following file: ~/.nautilus/metafiles/file:%2F%2F%2Fhome%2Fusername%2FDesktop.xml
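To explore this, a first step could simply be to dump what that metafile contains. Below is a minimal Java sketch, assuming the metafile is ordinary XML with one element per desktop file; the element name "file" and the attribute names are assumptions, so the sketch just prints whatever attributes it finds.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

// Rough sketch: dump the per-file attributes (icon position etc.) stored in the
// Nautilus desktop metafile. The exact element/attribute names are assumptions,
// so everything found on each "file" element is simply printed.
public class DumpDesktopMetafile {
    public static void main(String[] args) throws Exception {
        String path = System.getProperty("user.home")
            + "/.nautilus/metafiles/file:%2F%2F%2Fhome%2Fusername%2FDesktop.xml";
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse(new File(path));
        NodeList files = doc.getElementsByTagName("file");
        for (int i = 0; i < files.getLength(); i++) {
            NamedNodeMap attrs = files.item(i).getAttributes();
            StringBuilder line = new StringBuilder();
            for (int j = 0; j < attrs.getLength(); j++) {
                Node a = attrs.item(j);
                line.append(a.getNodeName()).append("=").append(a.getNodeValue()).append(" ");
            }
            System.out.println(line.toString().trim());
        }
    }
}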

Organized Important Files

I want the files on my desktop to be organized in a meaningful way, for example thematically and by time: from left to right ordered by time, from top to bottom by theme. Of course the thematic classification cannot be automatic. Moreover, ordering by time might not always be relevant.
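As a purely hypothetical illustration of this layout rule, a tiny sketch could assign each desktop file a column by modification time and a row by theme; classifyTheme() below is an invented placeholder, since the thematic classification cannot be automatic.

import java.io.File;
import java.util.Arrays;
import java.util.Comparator;

// Toy sketch of the layout rule described above: columns ordered by modification
// time (left to right), rows by theme (top to bottom). classifyTheme() is a
// hypothetical stand-in for a real classification.
public class DesktopLayoutSketch {
    static int classifyTheme(File f) {
        return Math.abs(f.getName().hashCode()) % 4;   // placeholder classification
    }

    public static void main(String[] args) {
        File[] files = new File(System.getProperty("user.home"), "Desktop").listFiles();
        if (files == null) return;
        Arrays.sort(files, new Comparator<File>() {     // oldest first = leftmost column
            public int compare(File a, File b) {
                long d = a.lastModified() - b.lastModified();
                return d < 0 ? -1 : (d > 0 ? 1 : 0);
            }
        });
        for (int col = 0; col < files.length; col++) {
            int row = classifyTheme(files[col]);
            System.out.println(files[col].getName() + " -> column " + col + ", row " + row);
        }
    }
}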

Friday 26 September 2008

Firefox Plugins

There are a number of useful Firefox add-ons:
  • NoScript: an add-on to control easily whether scripts (Java, JavaScript, Flash) are allowed to run; enabling is done per domain.
  • DownThemAll: with this add-on it is easier to download many resources from a web page at once.
  • Download Statusbar: adds a status bar which shows the progress of downloads in Firefox.
  • Greasemonkey: allows writing user scripts that run on top of web pages.
  • Firebug: a utility that helps with developing and debugging JavaScript.

Monday 22 September 2008

Hibernate

Hibernate is an object-relational (O/R) mapping framework. I mostly adapted the examples from chapter 2 of Java Persistence with Hibernate (the second edition of Hibernate in Action), Christian Bauer and Gavin King, November 2006, 880 pages, ISBN 1-932394-88-5.
The goal is to map the objects created by an object-oriented programming language such as Java to a relational database, in order to provide persistent objects, i.e. objects which can be stored on disk and which do not disappear when the virtual machine shuts down. Hibernate performs the mapping using configuration files in XML (or other formats). Here is an example of an XML mapping file called tasks.hbm.xml:
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
   "-//Hibernate/Hibernate Mapping DTD//EN"
   "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
  <class
     name="mytasks.Task"
     table="TASKS">
    <id
       name="id"
       column="TASK_ID">
       <generator class="increment"/>
    </id>
    <property
       name="name"
       column="TASK_NAME"/>
    <many-to-one
       name="nexttask"
       cascade="all"
       column="NEXT_TASK_ID"
       foreign-key="FK_NEXT_TASK"/>
  </class>
</hibernate-mapping>
This mapping corresponds to a class like:
class Task {
   private Long id;
   private String name;
   private Task nexttask;
   // constructor taking the name, plus getters and setters, omitted for brevity
}
And here is a small example program that stores and reloads tasks:

package mytasks;

import java.util.Iterator;
import java.util.List;

import org.hibernate.Session;
import org.hibernate.Transaction;

import persistence.HibernateUtil;

public class TaskExample {
    public static void main(String[] args) {
        // First unit of work: store a new task
        Session session = HibernateUtil.getSessionFactory().openSession();
        Transaction tx = session.beginTransaction();
        Task firsttask = new Task("Learn Hibernate");
        Long taskId = (Long) session.save(firsttask);
        tx.commit();
        session.close();

        // Second unit of work: read the tasks back
        Session newSession = HibernateUtil.getSessionFactory().openSession();
        Transaction newTransaction = newSession.beginTransaction();
        List tasks = newSession.createQuery("from Task m order by m.name asc").list();
        System.out.println(tasks.size() + " Task(s) found:");
        for (Iterator iter = tasks.iterator(); iter.hasNext();) {
            Task task = (Task) iter.next();
            System.out.println(task.getName());
        }
        newTransaction.commit();
        newSession.close();

        // Shutting down the application
        HibernateUtil.shutdown();
    }
}
It is possible to specify the configuration file for the session factory using new Configuration().configure(<location of config file>), for example:

SessionFactory sessionFactory = new Configuration()
    .configure("/persistence/tasks.cfg.xml")
    .buildSessionFactory();

Calling new Configuration().configure() without an argument looks for a file called hibernate.cfg.xml at the root of the classpath, outside of any package (a plain new Configuration() without configure() reads hibernate.properties instead). The Hibernate configuration file:
<!DOCTYPE hibernate-configuration SYSTEM "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <property name="hibernate.connection.driver_class">
      org.postgresql.Driver
    </property>
    <property name="hibernate.connection.url">
      jdbc:postgresql://localhost
    </property>
    <property name="hibernate.connection.username">
      sa
    </property>
    <property name="hibernate.dialect">
      org.hibernate.dialect.PostgreSQLDialect
    </property>
    <!-- Use the C3P0 connection pool provider -->
    <property name="hibernate.c3p0.min_size">5</property>
    <property name="hibernate.c3p0.max_size">20</property>
    <property name="hibernate.c3p0.timeout">300</property>
    <property name="hibernate.c3p0.max_statements">50</property>
    <property name="hibernate.c3p0.idle_test_period">3000</property>
    <!-- Show and print nice SQL on stdout -->
    <property name="show_sql">true</property>
    <property name="format_sql">true</property>
    <!-- List of XML mapping files -->
    <mapping resource="mytasks/tasks.hbm.xml"/>
  </session-factory>
</hibernate-configuration>

Note the use of a certain number of configuration entries:

  • the Hibernate connection pool provider, here the C3P0 connection pool provider
  • the Hibernate dialect used, here the PostgreSQLDialect
  • the connection information: driver, URL and username
  • the mapping file

There are a number of other possibilities to configure Hibernate.
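For instance, the same settings can also be applied programmatically on the Configuration object before building the session factory. Here is a small sketch; the property values are just placeholders matching the file above:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Sketch: configuring Hibernate programmatically instead of (or in addition to)
// the XML configuration file. The property values below are placeholders.
public class ProgrammaticConfig {
    public static SessionFactory build() {
        Configuration cfg = new Configuration()
            .addResource("mytasks/tasks.hbm.xml")   // mapping file
            .setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect")
            .setProperty("hibernate.connection.driver_class", "org.postgresql.Driver")
            .setProperty("hibernate.connection.url", "jdbc:postgresql://localhost")
            .setProperty("hibernate.connection.username", "sa")
            .setProperty("hibernate.show_sql", "true");
        return cfg.buildSessionFactory();
    }
}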

Antipatterns and Code Problems

Here are a few resources on antipatterns, i.e. patterns of behaviour or architecture that tend to create problems. First of all, there is a well-known book on the subject, which also has a web site: http://www.antipatterns.com/. Here is another resource about antipatterns in software development: AntiPattern in der Softwareentwicklung (note: the text is in German). In addition, a resource on symptoms which call for refactoring: SmellsToRefactorings.

Tuesday 16 September 2008

Java AWT bug on linux with XCB

When starting my Java application on Linux, I get the following backtrace:

Locking assertion failure. Backtrace:
#0 /usr/lib/libxcb-xlib.so.0 [0xc3e767]
#1 /usr/lib/libxcb-xlib.so.0(xcb_xlib_unlock+0x31) [0xc3e831]
#2 /usr/lib/libX11.so.6(_XReply+0x244) [0xc89f64]
#3 /usr/java/jre1.6.0_03/lib/i386/xawt/libmawt.so [0xb534064e]
#4 /usr/java/jre1.6.0_03/lib/i386/xawt/libmawt.so [0xb531ef97]
#5 /usr/java/jre1.6.0_03/lib/i386/xawt/libmawt.so [0xb531f248]
#6 /usr/java/jre1.6.0_03/lib/i386/xawt/libmawt.so(Java_sun_awt_X11GraphicsEnvironment_initD

It has already been discussed in a number of forums and bug reports:

http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6532373, or https://bugs.launchpad.net/ubuntu/+source/libxcb/+bug/87947

One possible workaround seems to be to set the following environment variable before starting the application:

export LIBXCB_ALLOW_SLOPPY_LOCK=1

Monday 15 September 2008

Autotest - WOW they did it!!!!

Well, I was reading, as always, a little bit about this automated testing approach, and I was again on this page: http://autotest.kernel.org/. Autotest is software for automated testing of the Linux kernel, that is, an infrastructure to build, boot and test Linux kernels. It has quite a lot of features:
  • bisection...
  • building
  • booting
  • filesystem check
  • python based library to automate scripts
There is a good presentation about autotest.

PXE Problems with NAS 1000

A good friend of mine gave me a NAS 1000 so that I could try a few things with it. In particular, I wanted to try PXE and diskless solutions with the installation files or disk data on the NAS server.

First I had some trouble starting the atftpd daemon because of user and group information which did not work. I should have checked the messages log right away... duh!!! It would have saved me a lot of time.

But then, as I tried getting the data from my Linux box using the Fedora tftp client, it did not work. Well, actually I am still not sure why it does not. Some routing errors, obviously:

Jan 1 13:36:21 icybox daemon.info atftpd[1951]: Server thread exiting
Jan 1 13:36:26 icybox daemon.notice atftpd[1952]: Fetching from 192.168.0.104 to ess

Saturday 13 September 2008

Syntax highlighting for the Web

As I was reading an interesting post from Otaku, Cedric's weblog, I learned about the existence of a web syntax highlighting solution: pastebin, which is GPL software. It seems to be based on another piece of GPL software: Genshi. This may prove useful once in a while, especially if I intend to port my posts to another blogging platform, since I am not completely sure about the user settings of Blogger.

Friday 12 September 2008

Java invokeLater

A number of months ago, I took a look at the new features of Java 1.5 and 1.6, and I came across the new java.util.concurrent package.

Anyone who has programmed GUIs in Java is certainly aware of the importance of running long work in background threads, so that the user can keep performing other tasks instead of waiting in front of a screen that is not refreshing, while SwingUtilities.invokeLater is used to push the results back onto the event dispatch thread. With this you get a much more responsive GUI. A typical example looks like this:

Thread t = new Thread() {
  public void run() {
    // the long-running task is performed here, off the Event Dispatch Thread
    SwingUtilities.invokeLater(new Runnable() {
      public void run() {
        // update the Swing components with the result, on the EDT
      }
    });
  }
};
t.start();

This technique is really fundamental to a well-programmed graphical interface.

But since Java 1.5, there are a number of additional structures which can be used to perform tasks in parallel. These are found in the package java.util.concurrent, which will be the topic of a future entry.
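As a small foretaste (just a minimal sketch, not from the book), an ExecutorService lets you hand a task to a thread pool and retrieve its result through a Future instead of managing threads by hand:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A quick taste of java.util.concurrent: submit a task to a thread pool and
// fetch its result through a Future.
public class ExecutorExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> result = pool.submit(new Callable<Integer>() {
            public Integer call() {
                // the long-running computation goes here
                return 42;
            }
        });
        System.out.println("Result: " + result.get());   // blocks until the task is done
        pool.shutdown();
    }
}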

Overview of Maven

Maven is a tool designed to support as many tasks as possible in the management of a software project.

Its purpose is to provide a simple tool to achieve the following tasks:

  • Builds
  • Documentation
  • Reporting
  • Dependencies
  • SCMs
  • Releases
  • Distribution

A number of good tutorials can be found on maven's guide page.

Archetypes:

In Maven there is the possibility to create archetype models of projects. This means that it is very easy to create new projects that come with a number of templates to start with. This is similar to what Rails offers.

This is done by issuing the following command:

$ mvn archetype:create -DgroupId=com.mycompany.app -DartifactId=my-app

Project Object Model: POM

There is a concept of a project object model, somewhat similar to Ant build files.

An example from the website (see this page):

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Maven Quick Start Archetype</name>
  <url>http://maven.apache.org</url>
  <dependencies>
      <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
      </dependency>
  </dependencies>
</project>

This model is quite easy to understand.

Project File and Directory Structure

The project file and directory structure depends on the archetype chosen to create a new project. There are means of configuring this, see: http://maven.apache.org/guides/mini/guide-creating-archetypes.html.

The build Life cycle

Each possible task (e.g. validate, compile, package) may require others to be performed first. This means that there are dependencies between the tasks (as in Ant).

Common tasks
  • mvn compile (compile the code)
  • mvn test (test the functionalities of this project)
  • mvn test-compile (compile test classes for this project)
  • mvn package (package the code of this project)
  • mvn clean (clean the build output)
  • mvn site (create a template website for the project)
  • mvn idea:idea (create an IntelliJ IDEA descriptor for the project)
  • mvn eclipse:eclipse (create the project description files for eclipse)
Maven Plugins

There are a number of plugins which can be useful with Maven. You can add them to the POM file of the project, see: How_do_I_use_plug-ins

A list of plugins can be found there.

SCM Plugin (Source Code Management plugin)

One of the many plugins is the SCM plugin, which offers useful tasks/goals for interacting with an SCM.

External Dependencies

There is also the possibility to configure external dependencies.

Deployment

Deployment is also possible once it is configured. For example, the created distribution can be copied and added to a repository using scp. For this, some information about user names, keys and/or passwords has to be configured.

Documentation

There are also some things to help with the creation of a documentation site using archetypes. See: guide-site.

Thursday 11 September 2008

Linux Links

Some information can be found using http://www.tuxfinder.com/. There is a guide about kernel development from Jonathan Corbet, a well-known Linux author and editor at LWN.net: https://ldn.linuxfoundation.org/article/everything-linux-kernel

www.google.com/linux and co

Well, while looking through old papers I just discovered an interesting thing: the existence of a certain number of URLs for topic-specific Google search engines: http://www.google.com/linux, http://www.google.com/microsoft. I would love to know which other ones exist... Is there a list somewhere?

Wednesday 10 September 2008

Using Busybox for serving Linux distributions

I want to use a Busybox-based box in order to test kernels through a PXE installation, while not requiring the hard disk of my machine, which should cut part of the noise. For this I would install tftp on the Busybox system... though it might also work with an NFS or a Samba approach... I should check that.

A central Linux Documentation page

As I was looking for a way to submit a patch to the kernel documentation about the i386 --> x86 as well as x86_64 change, I came upon an article about the Linux documentation, which gave a pointer to the work of Rob Landley at kernel.org/doc. I may take a look at what could be missing tomorrow.

Tuesday 9 September 2008

Useful appendices :-)

I have been reading this book, an excellent book on Linux: Wolfgang Mauerer, Linux Kernelarchitektur: Konzepte, Strukturen und Algorithmen von Kernel 2.6, Hanser 2004, ISBN 3-446-22566-8. Some of the information I write in this blog has been largely adapted from or influenced by reading it. A very useful thing is that the book has a web site with PDF versions of the appendices which are not in the printed book. It is a bit strange, but still extremely useful. The book is in German, so it will not be useful for everybody. There is also a list of useful documentation links: http://www.linux-kernel.de/docs/index.html. In particular:
  • Online Documents about Kernel
  • important RFCs (TCP/IP..., Differentiated Services fields)
  • GNU tool information
  • ELF format
  • important documentation from the kernel
So I have to say this is really a wonderful book on Linux. I just happened to learn from the author that he is writing a new, more current version of it.

Have you looked at JBOSS' projects lately ?

Did you take a look at JBoss lately? The amount of technologies they have is quite impressive. Well, I already knew some of them... but there are other technologies which I was not aware of. See for example the projects doc page. They have things on application servers, extensions for rich clients using JSF, rule engines, remoting mechanisms, object-relational mappings... You name it... well, perhaps not, but it is really impressive. So I am going to attack a JBoss technology series in this blog. Expect blog entries on:
  • JBoss application server
  • RichFaces
  • JBoss Remoting
  • Hibernate (though there were already one or two entries)
  • JRunit, a JUnit extension to test client/server applications

JBoss Remoting

An interesting framework: JBoss Remoting. There is a demo at http://docs.jboss.org/jbossas/remoting/demo/JBossRemoting_demo.htm, and there is also a very good article at http://www.onjava.com/pub/a/onjava/2005/02/23/remoting.html

Translation Lookaside buffer, aka TLB

In a few words, from the Wikipedia article: a CPU cache that the memory management hardware uses to improve the speed of virtual address translation.

Much information comes from this article.

The idea is that CPUs keep an associative memory to cache page table entries (PTEs) of virtual pages which were recently accessed.

When the CPU must access virtual memory, it looks in the TLB for the entry corresponding to the virtual page.

If an entry is found (a TLB hit), the CPU can use the value of the PTE it retrieved to calculate the physical address.

If it is not found (a TLB miss), the miss is handled in one of two ways, depending on the architecture:

  • through hardware: the CPU walks the page table itself to find the correct PTE. If one is found, the TLB is updated; if none is found, the CPU raises a page fault, which is then handled by the operating system.
  • through software: the CPU raises a TLB miss fault. The operating system intercepts it and invokes the corresponding handler, which walks the page table. If the PTE is found and present, the TLB is updated; if it is not present, the page fault handler is then in charge.

Mostly, CISC architectures (IA-32) use the hardware approach, while RISC architectures (Alpha) use the software approach. IA-64 uses a hybrid approach, because the hardware approach is faster but less flexible than the software one.
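To make the hit/miss logic above more concrete, here is a toy user-space sketch in Java; it is purely illustrative and has nothing to do with real MMU hardware or kernel code:

import java.util.HashMap;
import java.util.Map;

// Toy illustration of the TLB lookup logic described above (not real MMU code):
// first consult the TLB, on a miss walk the "page table", and fault if the page
// is not mapped at all.
public class TlbSketch {
    private final Map<Long, Long> tlb = new HashMap<Long, Long>();        // virtual page -> frame
    private final Map<Long, Long> pageTable = new HashMap<Long, Long>();  // the full mapping

    long translate(long virtualPage) {
        Long frame = tlb.get(virtualPage);
        if (frame != null) {
            return frame;                       // TLB hit
        }
        frame = pageTable.get(virtualPage);     // TLB miss: walk the page table
        if (frame == null) {
            throw new IllegalStateException("page fault: no mapping for page " + virtualPage);
        }
        tlb.put(virtualPage, frame);            // update the TLB with the PTE that was found
        return frame;
    }
}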

Replacement policy

If the TLB is full, some entries must be replaced. Depending on the miss handling strategy, different replacement policies exist:

  • Least recently used (aka LRU)
  • Not recently used (aka NRU)
Even when the TLB miss mechanism is implemented in software, the replacement strategy can be implemented in hardware. A number of newer architectures do this: IA-64...
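As an illustration of the LRU idea in user space (again, not kernel code), a Java LinkedHashMap in access order gives an LRU cache almost for free:

import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of an LRU replacement policy in plain Java: a LinkedHashMap in
// access order that evicts the least recently used entry once the fixed
// capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}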

Ensuring coherence with the page table

Another issue is to keep the TLB coherent with the page table it represents.

Monday 8 September 2008

Nice little tool isoinfo

I am working on having a simple RAM-based distribution to test a few things on a test system. For this, I learnt from this page that there exists a command to extract files directly from an ISO image without mounting the filesystem. This command is isoinfo and is used in the following way:

$ isoinfo -i isofilesystem.iso -J -x /filetobeextracted > filereceivingtheextracteddata

Nice!

Sunday 7 September 2008

Kudos to helpwithpcs.com

I found a very nice and simple course on the basics of computer architecture: helpwithpcs.com. I collected very simple reminders of things I already knew... but it is good to take a look at what you might not know or might have missed. It is simple and very well explained.

Wednesday 3 September 2008

Read Copy Update

Read Copy Update (aka RCU) is another synchronisation mechanism, used to avoid reader writer locks.

An excellent explanation can be found at LWN.net, in three parts, by Paul McKenney and Jonathan Walpole.

The basic idea is that when a resource is modified, a new updated structure is put in its place, and the old structure is not discarded right away: it stays around until the references to it held by other processes are dropped. This can be seen as similar to garbage collection, but as noted in What is RCU? Part 2: usage, the old structure is not discarded automatically once there are no references any more, and the programmer must mark the read-side critical sections of the code.
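The copy-then-publish half of this idea can be illustrated in user space with an atomic reference swap. This is only an analogy, since real RCU also has to defer reclamation of the old structure until all readers are done, which the Java garbage collector happens to handle for us here:

import java.util.concurrent.atomic.AtomicReference;

// User-space analogy for the "copy, update, publish" part of RCU: readers always
// see either the old or the new immutable snapshot, and the (single) writer
// replaces the reference atomically. Real RCU additionally defers freeing the
// old version until all readers have dropped it.
public class RcuLikeSnapshot {
    static final class Config {
        final int value;
        Config(int value) { this.value = value; }
    }

    private final AtomicReference<Config> current = new AtomicReference<Config>(new Config(0));

    int read() {
        return current.get().value;              // readers take no lock
    }

    void increment() {                           // assumes writers are serialized, as in classic RCU
        Config oldCfg = current.get();           // read the current version
        Config newCfg = new Config(oldCfg.value + 1);  // copy and update it
        current.set(newCfg);                     // publish the new version in one step
    }
}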

There is an interesting page on RCU arguing that this technique is used more and more in the kernel as a replacement for reader writer locks.

Tuesday 2 September 2008

Kernel Locking mechanisms

An important aspect of programming in an environment with threads and processes is to prevent the different processes from interfering with each other at the wrong time.

In Linux, a number of methods are used to ensure that the data or code sections of processes are not disturbed by others. These methods are:

  • atomic operations
  • spinlocks
  • semaphores
  • reader writer locks

These locks and mechanisms are in the kernel space. Other methods or locking mechanisms are used in the user space.

atomic operations

The idea behind atomic operations is to perform very basic changes on a variable in a way that cannot be interfered with by other processes, because the operations are so small. For this, a special data type called atomic_t is used.

On this data type, a number of atomic operations can be performed:

  • atomic_read(atomic_t *v): read the variable
  • atomic_set(atomic_t *v, int i): set the variable to i
  • atomic_add(int i, atomic_t *v): add i to the variable
  • atomic_sub(int i, atomic_t *v): subtract i from the variable
  • atomic_sub_and_test(int i, atomic_t *v): subtract i from the variable; return true if the result is 0, else false
  • atomic_inc(atomic_t *v): increment the variable
  • atomic_inc_and_test(atomic_t *v): increment the variable; return true if the result is 0, else false
  • atomic_dec(atomic_t *v): decrement the variable
  • atomic_dec_and_test(atomic_t *v): decrement the variable; return true if the result is 0, else false
  • atomic_add_negative(int i, atomic_t *v): add i to the variable; return true if the result is negative, else false
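For comparison, user-space Java offers a very similar set of operations on java.util.concurrent.atomic.AtomicInteger; a small sketch:

import java.util.concurrent.atomic.AtomicInteger;

// The user-space counterpart in Java: AtomicInteger offers operations very much
// like the kernel's atomic_t helpers listed above.
public class AtomicDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(0);     // roughly atomic_set(&v, 0)
        v.incrementAndGet();                        // roughly atomic_inc(&v)
        v.addAndGet(5);                             // roughly atomic_add(5, &v)
        boolean reachedZero = v.addAndGet(-6) == 0; // roughly atomic_sub_and_test(6, &v)
        System.out.println(v.get() + " " + reachedZero);
    }
}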

Note that I discussed per-CPU local variables in another post.

spinlocks

This kind of lock is the most commonly used, above all to protect critical sections from access by other processes for short periods.

The kernel checks continuously whether a lock can be taken on the data. This is an example of busy waiting.

spinlocks are used in the following way:

spinlock_t lock = SPIN_LOCK_UNLOCKED;
...
spin_lock(&lock);
/** critical operations */
spin_unlock(&lock);

Due to the busy waiting, if the lock is never released the computer may freeze; therefore spinlocks should not be held for long periods.

semaphores

Unlike with spinlocks, the kernel sleeps while waiting for a semaphore to be released. This kind of structure should therefore only be used for locks that are held for some length of time; for short locks, spinlocks are recommended.
DECLARE_MUTEX(mutex);
....
down(&mutex);
/** critical section*/
up(&mutex);

The waiting processes then sleep in an uninterruptible state until the lock is released. A process cannot be woken up by signals during this sleep.

There are other alternatives to the down(&mutex) operation:
  • down_interruptible(&mutex) : the process can be woken up using signals
  • down_trylock(&mutex): tries to take the lock; if it cannot be taken immediately, the call returns instead of sleeping
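The same distinctions exist in user space in Java's java.util.concurrent.Semaphore, which may help to remember the kernel variants; a small sketch:

import java.util.concurrent.Semaphore;

// Java's Semaphore mirrors the kernel variants listed above: acquire() can be
// interrupted (like down_interruptible), acquireUninterruptibly() cannot (like
// plain down), and tryAcquire() never blocks (like down_trylock).
public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore mutex = new Semaphore(1);

        mutex.acquire();                 // may throw InterruptedException
        try {
            // critical section
        } finally {
            mutex.release();
        }

        if (mutex.tryAcquire()) {        // does not block if the lock is taken
            mutex.release();
        }
    }
}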

For the user space, there are also futexes.... But this is another story.

reader writer locks

With this kind of lock, several processors can read the locked data structure concurrently, but when the structure is to be written, it can only be manipulated by one processor at a time.
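The analogous user-space construct in Java is ReentrantReadWriteLock; a small sketch of the same idea:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// User-space analogy: many threads may hold the read lock at the same time,
// but the write lock is exclusive, just like the kernel's reader writer locks.
public class ReadWriteDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        lock.readLock().lock();
        try {
            return value;            // several threads may read concurrently
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        lock.writeLock().lock();
        try {
            value = newValue;        // only one writer at a time
        } finally {
            lock.writeLock().unlock();
        }
    }
}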

Monday 1 September 2008

GIT tutorial

I was having a look at the git tutorial. The important tasks: 1/ download the code of the Linux kernel with git:

>git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git linux-2.6

2/ pulling new code from git:

> git pull

3/ reverse code changes

> git checkout -f

4/ committing the modifications:

> git commit -a

5/ undo the last commits (note this is different from a revert, which consists of a patch reverting some other patch)

> git reset HEAD~2

6/ list branches

> git branch

7/ create branch

> git checkout -b my-new-branch-name master

8/ choose a branch and make it the current one:

> git checkout branch

9/ Tell which branch is current

> git status

10/ merging code into a branch mybranch

> git checkout mybranch

> git merge anotherbranch