I made a change to the Blogger configuration to ease later blogging work. Older entries may not be correctly formatted.

Showing posts with label linux. Show all posts

Saturday, 29 June 2013

GDM Configuration

Well, I learned something about the GDM configuration by reading the well-written GNOME GDM manual. However, that did not solve my GNOME configuration problems.

But at least I now know a little more about how to configure the user screen and put useful information on it through the use of an Xwilling script.

Friday, 28 June 2013

Systemd and service control under Fedora

I must say I am currently a bit lost among the different mechanisms to start and stop services. I used to perform these tasks with the standard /etc/init.d/... scripts, but it seems that this no longer completely works: there are now systemctl, service and chkconfig commands. With the SysV-style tools, one uses

service sshd start

to start the ssh daemon.

service sshd stop

to stop the ssh daemon.

chkconfig sshd on

to enable the ssh daemon so that it starts at boot.


To use systemd, the following commands come handy:

systemctl start sshd.service

to start the ssh daemon.

systemctl stop sshd.service

to stop the ssh daemon.

systemctl enable sshd.service

to enable the ssh daemon so that it starts at boot.
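To keep the two families of commands straight, the mapping can be sketched as a tiny shell helper. This is only a dry-run illustration (the helper name svc is my own invention): it prints the systemctl command it would run instead of executing it.

```shell
# Hypothetical dry-run helper: maps SysV-style actions to the systemctl
# commands listed above. It only echoes the command, it does not run it.
svc() {
  action=$1
  unit=$2
  case "$action" in
    start|stop|restart|status)
      echo "systemctl $action $unit.service" ;;
    on)
      echo "systemctl enable $unit.service" ;;
    off)
      echo "systemctl disable $unit.service" ;;
    *)
      echo "unknown action: $action" >&2
      return 1 ;;
  esac
}

svc start sshd   # prints: systemctl start sshd.service
svc on sshd      # prints: systemctl enable sshd.service
```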




Saturday, 9 June 2012

Gnome 3.0 ARRRH!#"§$#!!!

Rant and perhaps solution

I finally reinstalled Fedora. So now I have a completely new Fedora version and I am not amused. Really not amused. First I had trouble with Grub, then I had trouble with the DNS, and finally I had to curse at Gnome 3.0. Or is it only Unity... I am not sure. I do not usually rant. But this time, I am more than annoyed. I would not say I am pissed off. But nearly. I keep asking myself: why? Why? Why?


First of all, Grub... Actually I did not want to reinstall the boot directory from Grub; I wanted to be able to use my old Fedora if I needed it. But that did not really work. So I let the new Fedora (or should I say Anaconda) overwrite the boot loader.


Then, for some reason, the DNS was not configured and I had to reconfigure it by hand. Why?


Finally, how is Gnome 3.0 supposed to be used? Why can't I create my desktop icons? Why can't I add launchers to the panel or the desktop? My computer is supposed to be used; I do not really care how it looks.


I just found, in this post, the solution to at least some of my problems.

Monday, 8 March 2010

commandlinefu

I bought a number of Linux magazines during my week's holiday and discovered a number of new, interesting, useful things:

  • Ubuntu One, a way to store data on Ubuntu servers and share it with other Ubuntu users (I must check that)
  • Writer's Cafe, a tool helping creative writing
  • Impro-Visor, GPL software for working with lead sheets.

And one thing I really found useful: commandlinefu (http://www.commandlinefu.com/). This site provides a way for people to share useful command-line tricks. I should probably follow them; I will learn a lot. And it has already started.

I already learnt a new useful command: youtube-dl. Say you have some YouTube URL:

http://www.youtube.com/watch?v=1504cSBhWG0

If you want to get the URL of the file being downloaded, you can use (-b for best quality and -g to just print the URL):

$> youtube-dl -b -g http://www.youtube.com/watch?v=1504cSBhWG0

Of course this command has a certain number of options.
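As a small aside, the video id behind v= can be pulled out of such a URL with plain shell parameter expansion; a generic sketch, independent of youtube-dl:

```shell
# Extract the value of the v= parameter from a YouTube watch URL
# using POSIX shell parameter expansion (no external tools).
url="http://www.youtube.com/watch?v=1504cSBhWG0"
id=${url##*v=}    # strip everything up to and including the last "v="
id=${id%%&*}      # drop any further parameters after an "&"
echo "$id"        # prints: 1504cSBhWG0
```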

Thursday, 18 February 2010

Desktop recording Suse

I have just tried desktop recording software under OpenSuse 11.2: recordmydesktop. It really worked like a charm for the video part, even with the sound.

Sunday, 20 September 2009

Fedora 11 installed

I installed Fedora 11. I still need to see what the differences are and which software I need to install this time; that would have been the advantage of only updating (which did not really work for me from version 9 to 10).

The main new thing seems to be that livna has more or less been replaced by rpmfusion. So I still have to see whether I can install all the software I need for my own pleasure.

I will probably comment in this entry.

To install Linux I used the CD from PCWELT Linux 5/2009.

LVM and partitioning

My computer is partitioned in such a way that the /boot partition is ext3 outside the LVM, while the '/' partition is on an LVM volume. That way I can easily resize and create new partitions. I used a new partition so that I should be able to go back to my older Linux version, though I have not tested that yet.

One of the important changes in Fedora 11 (and perhaps Fedora 10) is the use of ext4.

One of my colleagues is somewhat sceptical about partitioning, though I must say that with LVM it is a real pleasure, but OMMV (our mileage may vary).

Ctrl+ Alt+ Backspace

In the new version of Xorg, the default configuration does not allow you to kill the X server with Ctrl+Alt+Backspace. Whether I find that good or not... well, I am not so sure.

Desktop background

I really like the new desktop background. I suppose I might keep it for some time. I really like the birds. Though, searching with Google, there seem to be a lot of cool backgrounds, e.g. http://www.wallpaperlinux.com/v/Fedora/Fedora+11+Linxu+Wallpaper+Fedora+11+Desktop+Background.jpg.html

CD automount

I have a small problem with the auto-mount for CDs. It does not seem to be working, and I am not sure why.

Installing RPM fusion

See: http://rpmfusion.org/Configuration/.

At least using RPM Fusion I can install mplayer and vlc. I will have to see whether I can also play WMV, MPEG and AVI files just as I wish. It seems to be the case.

Updating would have been better?

With these things you never know. But I wonder whether I should rather have updated the distribution instead of installing a new version. I am still not finished.

Thursday, 11 September 2008

Linux Links

Some information can be found using http://www.tuxfinder.com/. There is also a guide about kernel development from Jonathan Corbet, well-known Linux author and editor at LWN.net: https://ldn.linuxfoundation.org/article/everything-linux-kernel

www.google.com/linux and co

Well, I just discovered an interesting thing while looking through old papers: the existence of a number of URLs for specific Google search engines: http://www.google.com/linux and http://www.google.com/microsoft. Though I would love to learn which other possibilities there are... Is there a list somewhere?

Wednesday, 27 August 2008

Memory Management Documentation in Linux Kernel

As I was looking at the kernel newbies info, I found this post from Mel Gorman about documentation for memory management under Linux. It contains links to two documents.

Tuesday, 1 July 2008

Lguest - simple virtualization mechanism

From the article at LWN.net, I learnt that there is a small virtualisation framework for Linux using a hypervisor: lguest. This virtualisation framework allows the testing of drivers, so I should use it to test some kernel changes which might be problematic.

Wednesday, 11 June 2008

Kernel KnowHow: Per CPU Variables

I just read the section on per-CPU variables in the Linux Device Drivers book from O'Reilly (TODO link).

The use of per-CPU variables helps improve performance by limiting cache sharing between CPUs. The idea is that each CPU has its own private instance of a given variable.

A variable can be defined using the following macro: DEFINE_PER_CPU(type, name);

It is important to note that some protection is still needed for per-CPU variables, because:

  • the scheduler could move the process to another CPU
  • the process could be preempted in the middle of a modification of the variable

Therefore, each instance should be updated using a simple mechanism that protects the variable during the change. This can be done with the get_cpu_var() and put_cpu_var() macros, for example:

get_cpu_var(variabletoupdate)++;
put_cpu_var(variabletoupdate);

It is also possible to access the variable values of other processors using per_cpu(variable, cpu_id); but a locking mechanism must be implemented in these cases.

Memory allocation for dynamically allocated per-CPU variables is performed using:

void *alloc_percpu(type);
void *__alloc_percpu(size_t size, size_t align);

Deallocation is achieved using: free_percpu().

Accessing a dynamically allocated per-CPU variable is done using per_cpu_ptr(void *per_cpu_var, int cpu_id), which returns a pointer to the variable's content. For example:

int cpu;
int *ptr;

cpu = get_cpu();                     /* get current CPU id, disable preemption */
ptr = per_cpu_ptr(per_cpu_var, cpu);
/* ... work with *ptr ... */
put_cpu();                           /* re-enable preemption */

Note the use of put_cpu() to release the CPU again once the update is done.

Tuesday, 3 June 2008

Kernel Index page from LWN.net

As I was looking for some supplementary information on the kernel, I found this page, which gives a categorized list of kernel articles from LWN.

Monday, 26 May 2008

Kernel Makefile

In this post, I summarise the main Makefile parameters and targets. Right now it mainly corresponds to the output of $ make help, but I might edit this entry to add useful information.

First of all, use (if in the kernel source directory):

$ make help

or, if you are not in the kernel source directory (replace /path/to/kernel-sources by the actual location):

$ make -C /path/to/kernel-sources help

This outputs, more or less, the following list of possible targets and supplementary information:

  • a few variables can be set:
    • ARCH=um ... - build for the corresponding architecture
    • V=1 - verbose build
    • V=2 - gives the reason for the rebuild of a target
    • O=dir - output directory of the build, including the .config file
    • C=1 or C=2 - checking (resp. force checking) of C sources
  • Documentation
    • make [htmldocs|mandocs|pdfdocs|psdocs|xmldocs] - build the corresponding docs
  • Packages
    • make rpm-pkg - builds source and binary RPM packages
    • make binrpm-pkg - builds binary RPM packages
    • make deb-pkg - builds deb packages
    • make tar-pkg - builds an uncompressed tarball
    • make targz-pkg - builds a gzip-compressed tarball
    • make tarbz2-pkg - builds a bzip2-compressed tarball
  • Cleaning targets:
    • clean - Remove most generated files but keep the config and enough build support to build external modules
    • mrproper - Remove all generated files + config + various backup files
    • distclean - mrproper + remove editor backup and patch files
    Note that it can be useful to use ARCH=... in the cleaning process
  • Kernel Configuration targets:
    • config - Update current config utilising a line-oriented program
    • menuconfig - Update current config utilising a menu based program
    • xconfig - Update current config utilising a QT based front-end
    • gconfig - Update current config utilising a GTK based front-end
    • oldconfig - Update current config utilising a provided .config as base
    • silentoldconfig - Same as oldconfig, but quietly
    • randconfig - New config with random answer to all options
    • defconfig - New config with default answer to all options
    • allmodconfig - New config selecting modules when possible
    • allyesconfig - New config where all options are accepted with yes
    • allnoconfig - New config where all options are answered with no
  • Other useful targets:
    • prepare - Set up for building external modules
    • all - Build all targets marked with [*]
      • * vmlinux - Build the bare kernel
      • * modules - Build all modules
      • modules_install - Install all modules to INSTALL_MOD_PATH (default: /)
    • dir/ - Build all files in dir and below
    • dir/file.[ois] - Build specified target only
    • dir/file.ko - Build module including final link

Note that there are also some targets concerning tags for editors, but I am not sure what they really bring.
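The make -C pattern above is not kernel-specific; it works on any directory containing a Makefile. A toy sketch of the help convention, using a throwaway Makefile in a temporary directory (the targets shown are made up):

```shell
# Create a throwaway Makefile with a "help" target, in the style of the
# kernel's "make help", and invoke it via make -C from outside the directory.
dir=$(mktemp -d)
printf 'help:\n\t@echo "Cleaning targets:"\n\t@echo "  clean  - Remove most generated files"\n' > "$dir/Makefile"
out=$(make -s --no-print-directory -C "$dir" help)
echo "$out"
rm -rf "$dir"
```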

Linux Standard Base

Yes, I like standards... Standards are great... Of course one should not exaggerate, but yes, a standard base is a good thing for Linux.

So take a look at the Linux Standard Base (aka LSB). It is a specification of the rules all Linux distributions should respect.

For example, it specifies the executable and linking format, ELF, as well as a number of useful libraries: libc, libm, libpthread, libgcc_s, librt, libcrypt, libpam and libdl. Some utility libraries are also specified: libz, libncurses, libutil.

It also specifies a number of command-line commands (see the standard on this subject).

Linux Config Archive

I found an interesting site, an archive of configuration files for the Linux kernel. Unfortunately, it did not work when I tried it. But at least the idea is quite good, and I am sure I will try it from time to time.

Sunday, 25 May 2008

init scripts

For a few things I am interested in doing, I wanted to have a small script preparing things as soon as I boot up. Perhaps it would be more appropriate to use atd or cron for this, but I wanted to understand how the init-script system works.

So I prepared a small script in order to start some system tools as soon as the boot process is finished.

For example, a little tool starting a remote process at first boot, which would allow me to use some remote processing facilities, e.g. a (focused) crawler. This could also be some system starting before/after the httpd daemon is up.

For this I took a look at the /etc/init.d/postgres script.

#!/bin/sh
#
# newscript     This is the init script for starting up the newscript
#               service
#
# chkconfig: - 99 99
# description: Starts and stops the newscript that handles \
#              all newscript requests.
# processname: mynewscript
# pidfile: /var/run/mynewscript.pid

# Version 0.1 myname
# Added code to start newscript

Note the use of chkconfig: - 99 99 .

This should be adapted with more useful priorities; basically, 99 means that the init script is started as one of the last scripts. A look at $ man chkconfig should prove useful.

The new script stores the PID of the newscript application in /var/run/mynewscript.pid.

Note that it also stores things in /var/lock/subsys/.
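Putting the pieces together, a minimal runnable skeleton of such an init script might look as follows. The service name newscript is hypothetical, and the lock file is placed under /tmp here so the sketch can be exercised without root; a real script would use /var/lock/subsys/ and /var/run/ as above:

```shell
#!/bin/sh
#
# newscript     Starts and stops the (hypothetical) newscript service.
#
# chkconfig: - 99 99
# description: Starts and stops the newscript service.

# A real init script would use /var/lock/subsys/newscript instead.
LOCKFILE=/tmp/newscript.lock

start() {
    echo "Starting newscript"
    touch "$LOCKFILE"
}

stop() {
    echo "Stopping newscript"
    rm -f "$LOCKFILE"
}

status() {
    if [ -f "$LOCKFILE" ]; then
        echo "newscript is running"
    else
        echo "newscript is stopped"
    fi
}

case "$1" in
    start)   start ;;
    stop)    stop ;;
    status)  status ;;
    restart) stop; start ;;
    *)       echo "Usage: $0 {start|stop|status|restart}" ;;
esac
```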

Maintainers File in Kernel and SCM tools for the kernel

I have just had a look at the MAINTAINERS file in the Linux kernel tree.

I noticed that there are a number of orphaned projects. The question is whether any of these orphaned projects really needs to be taken care of.

Another interesting thing was learning about the different SCM and patching tools used in kernel development: git, quilt and hg.

Here is an interesting overview of the reason for the development of git and quilt.

I am really starting to like the patch approach, and the article linked above gives a good idea of the reasons for using it. I should try to summarise, in a future post, the advantages and disadvantages of the different source code management approaches.

Kernel Stuff

I did some little things with kernel programming (or rather compiling) these days.

Part of what I did was compiling kernels, because I wanted to try UML (User-Mode Linux).

So that's what I did:

  • Download the kernel configs from: http://uml.nagafix.co.uk/kernels/.
  • Download kernels from kernel.org.
  • untar the kernels to some directory
  • cd into the main directory of the kernel
  • copy the config of the kernels into main directory as .config file
  • $ make ARCH=um oldconfig
  • answered the necessary questions as well as I could
  • $ make ARCH=um

    At that point some errors appeared, so I tried to correct them.

  • to help me in the debugging process I used $ make V=1 ARCH=um
  • when something did not work well, I used the gcc output to call gcc directly. For example, sometimes the architecture include files would not be found, so I used -Iinclude; sometimes a preprocessor macro was not set correctly, so I used -D__someprecompilermarks__. At some point I removed a problematic definition by using this together with an #ifndef in the header file: $ gcc ..... -Iinclude -D__someprecompilermarks__ ...
  • then I also downloaded a few kernel repositories using git, though I still need to perfect this.
  • I read (or skipped/read) quite a few Documentation files from the kernel or from the internet.
  • I familiarised myself with the git web interface, together with having a kernel RSS feed in Thunderbird.

And all this in a day and a half, together with other things.

Sunday, 18 May 2008

Namei and RPM

As I tend to forget the syntax of rpm's query format pretty often, I had a short look on the internet. I found this web page, which explains some aspects of the rpm -q command.

I discovered a command which I did not know and which might turn out quite useful: namei. It follows symbolic links until their end point is found.

Example:

$> namei /usr/java/latest/
f: /usr/java/latest/
d /
d usr
d java
l latest -> /usr/java/jdk1.6.0_03
d /
d usr
d java
d jdk1.6.0_03
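A related tool for the same job is readlink -f, which resolves a whole chain of symlinks in one go. A reproducible sketch in a temporary directory (the jdk1.6.0_03 name just mirrors the example above):

```shell
# Build a small symlink chain like the /usr/java example, then resolve it.
base=$(mktemp -d)
mkdir "$base/jdk1.6.0_03"
ln -sfn "$base/jdk1.6.0_03" "$base/latest"
target=$(readlink -f "$base/latest")
echo "$target"   # resolves to .../jdk1.6.0_03
rm -rf "$base"
```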

Thursday, 17 April 2008

Ubuntu Printer Canon IP1700 Pixma using IP1800 driver

I have been trying to install, using packages, the drivers for a friend's printer: a Canon iP1700 PIXMA. It proved more complex than I first thought, so she was without a printer for a while. I had tried, as suggested on a few web pages, to use the driver of the iP2200 via alien. I finally managed after first removing all the iP2200 drivers I had installed and then installing the iP1800 drivers for Debian from some web page. So at least she has a printer now. Edit: note that packages for Ubuntu 7.10 can be downloaded there. It seems that for Hardy Heron (aka Ubuntu 8.04) some other drivers exist; see this page.