Thursday, 28 July 2011

Get Old fb Chat Back

Today, while surfing the Internet, I found a very simple way to get the old Facebook chat back. This is the simplest way to get rid of the new chat sidebar, which only shows you the list of people you chat with the most. Also, there is no way to see who’s online right now in the new Facebook chat. You’ll find many ways to get rid of the new sidebar, most of them quite nerdy, but I don’t think you’ll find a working solution easier than this one.

You just need to install a script in your browser. The script, named ‘Facebook-Chat-enhancer’, solves the problem. Once the script is installed, you will be able to see all your friends who are online at the moment on chat, and there is a scroll bar in the sidebar which you can scroll up and down to see all your online friends, just like the good old times. No more offline friends will be shown on chat.

Installing the script in Google Chrome
Installing the ‘Facebook-Chat-enhancer’ script is as easy as installing Chrome extensions. Go to this page, just click on the ‘Install’ button, and restart your browser after installing.
There is another plugin; I haven't tested it, but I hope it works just fine: HERE.

Installing the script in Mozilla Firefox
In Firefox, you need to install the Greasemonkey add-on before installing the ‘Facebook-Chat-enhancer’ script. Install the Greasemonkey add-on here and, when done, install this script to get back the old Facebook chat. Restart your browser and you’re done!

Script for Opera, Chrome and Firefox
In the Chrome extensions gallery there is another one for the old Facebook chat, HERE. It does the same thing as the previous one. However, there is one flaw: it asks for access to your data on the sites you visit.

Thank me if it works :P
And please do share this post on your Facebook wall.

Do comment if it doesn't work for you.

Tuesday, 26 July 2011

Startup Optimisation with Bootcharts (Ubuntu KDE and GNOME)

The SoK project looks far from completion. I am certain that the project won't get completed, or won't achieve the results that I thought it would before I started. Clearly I didn't have much of an idea before starting the project, and my mentor too was clueless at some points, like the part where I had to find the time spent in each of the starting programs and modules. He had suggested using a function like clock() in each of the classes and then finding the difference between each script, something like clock(arg1) - clock(arg2). Clearly that is a Herculean task in magnitude as well as complexity. So I have resorted back to using the good old bootchart. It's true that it won't give me a very precise time difference, but at least it will give me a rough idea. So I installed the bootchart application. From :-
Apparently there is a newer version

Sadly I didn't find any difference in their working. For anyone still wondering what is a Bootchart, a Bootchart is a tool for performance analysis and visualization of the GNU/Linux boot process. Resource utilization and process information are collected during the boot process and are later rendered in a PNG, SVG or EPS encoded chart. Bootchart provides a shell script to be run by the kernel in the init phase. The script will run in background and collect process information, CPU statistics and disk usage statistics from the /proc file system. The performance data are stored in memory and are written to disk once the boot process completes.

Obviously you can't optimise or reduce the time spent by the events unless you know where the time is spent; bootchart gives you that idea. Having downloaded the bootchart files from either of the above sources, browse to the extracted directories and install them by :-

---> make
---> sudo make install

Or with a single apt-get command :-

aaditya@ubuntu:~$ sudo apt-get install bootchart
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following packages were automatically installed and are no longer required:
  gcj-jre gcj-4.4-jre libgcj10-awt
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
The following NEW packages will be installed:
The following packages will be upgraded:
1 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 34.2kB of archives.
After this operation, 90.1kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 maverick/universe bootchart i386 0.90.2-7 [12.5kB]
Get:2 maverick/universe pybootchartgui i386 0+r141-0ubuntu2 [21.7kB]

Having downloaded and installed the files, you just need to reboot your system. When you restart your system next time, browse to /var/log/bootchart and you will find a nice png image showing your boot chart lying there.

Here is my boot chart for my Gnome Desktop system

title = Boot chart for ubuntu (Tue Jul 26 17:53:32 IST 2011)
system.uname = Linux 2.6.35-30-generic #54-Ubuntu SMP Tue Jun 7 18:40:23 UTC 2011 i686
system.release = Ubuntu 10.10
system.cpu = model name    : Intel(R) Core(TM) i3 CPU       M 370  @ 2.40GHz
model name    : Intel(R) Core(TM) i3 CPU       M 370  @ 2.40GHz
model name    : Intel(R) Core(TM) i3 CPU       M 370  @ 2.40GHz
model name    : Intel(R) Core(TM) i3 CPU       M 370  @ 2.40GHz (4)
system.kernel.options = BOOT_IMAGE=/boot/vmlinuz-2.6.35-30-generic root=UUID=C010D42210D42168 loop=/ubuntu/disks/root.disk ro i8042.reset i8042.nomux i8042.nopnp i8042.noloop quiet splash

You will have to download/save the above image and zoom it to accurately view the events.

Here is my boot chart for my KDE desktop system

You will have to download/save the image and zoom it to accurately view the events.

I guess this is the all-important bootchart, and a lot of the future progress will depend on the findings from it. The coming weeks, and maybe a future post, will be driven by what I find here.

Friday, 22 July 2011

Best Linux Quotes & Jokes

It seems most of the best (funny) Linux jokes are Linus Torvalds jokes; here are the best of the lot :-

We all know Linux is great... it does infinite loops in 5 seconds.
- Linus Torvalds about the superiority of Linux on the Amsterdam Linux Symposium

"... being a Linux user is sort of like living in a house inhabited by a large family of carpenters and architects. Every morning when you wake up, the house is a little different. Maybe there is a new turret, or some walls have moved. Or perhaps someone has temporarily removed the floor under your bed." - Unix for Dummies, 2nd Edition (Found in the .sig of Rob Riggs)

`When you say "I wrote a program that crashed Windows", people just stare at you blankly and say "Hey, I got those with the system, *for free*".' (By Linus Torvalds)

“See, you not only have to be a good coder to create a system like Linux, you have to be a sneaky bastard too.” (By Linus Torvalds)

"All operating systems sucks, but Linux just sucks less" - Linus Torvalds

By golly, I'm beginning to think Linux really is the best thing since sliced bread. -- Vance Petree, Virginia Power

Computers are like air conditioners - they stop working properly when you open Windows.

"Linux is user friendly, it's just picky about its friends" 

Sunday, 17 July 2011

Adding Systemd to GNOME/KDE

A still of the KDE desktop

Haven't been doing a lot of work of-late, sighhh... out of ideas really !!!!

I had earlier assumed we would use systemd/launchd as an external dependency, or copy its way of working across all the startup scripts and applications. I came across an interesting discussion on the GNOME mailing list, but my mentor thinks that would not be viable. So that idea is pretty much ruled out. Anyone genuinely interested in following the topic can read this interesting conversation by Lennart, the creator/maintainer of systemd, here :-

My mentor feels that it would be better to augment and improve the existing kdeinit, kded, etc. scripts. Currently kdeinit and kded call and start other scripts and applications serially; if we parallelise more and more events, it would reduce the startup time. We would have to let them do their work asynchronously and let the modules report back when they're finished setting things up. On my part, I would have to edit most if not all of the scripts. To begin with, I have been asked to look at what takes the longest during launch. Maybe I should try adding a lot of debug output to kdeinit that shows how long each operation takes: call utime() between the various methods called in kdeinit and print out the differences. Then I should look at the timings of all the scripts, see how much time is spent in each process, and parallelise the particular scripts that take the most time.
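The timing idea can be sketched quickly. Everything below is hypothetical (the real kdeinit is C++, and the phase names are stand-ins), but it shows the shape of the per-step debug output being proposed:

```python
import time

def timed(label, fn, *args):
    """Run fn and print how long it took: the kind of per-step debug
    output proposed for kdeinit (the real code is C++; this is a sketch)."""
    start = time.monotonic()
    result = fn(*args)
    elapsed = time.monotonic() - start
    print(f"{label}: {elapsed * 1000:.1f} ms")
    return result

# Hypothetical launch phases standing in for kdeinit's start-up steps.
def start_klauncher():
    time.sleep(0.05)

def start_kded():
    time.sleep(0.12)

timed("klauncher", start_klauncher)
timed("kded", start_kded)
```

A monotonic clock is the right choice here, since wall-clock time can jump during boot.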

Using the debug* files here: 
Well, first I should probably find out what uses time during launch; if kded starts instantly, it isn't worth wasting time on making it launch stuff in parallel.


Sighhh... So Much to DO ....       o_O

Wednesday, 6 July 2011

Adding Meta Tags to your Blogger Blog

What are meta tags and why should I add meta tags to my Blog ?

Meta tags are the magic words that tell the search engine (Google/Yahoo/Bing...) bots about the keywords and tags in your content. They tell the search engine what your blog is about, so that your blog gets emphasised for those keywords. Adding meta tags is an important factor in organic search engine optimisation (SEO). Meta tags allow search engines to index your web pages more accurately; in other words, they communicate more information about your site to the search engines so that your pages are indexed correctly.

For Example :-
As you can see in the above screenshot, "hacks, help, tips and tricks for open source learners and hackers" is the meta description which I added to my blog.

Unfortunately, Blogger doesn't have an add-tags feature, so this is how you can add tags to your blog on Blogger :-

(1). After signing in to your blog, go to Design.
(2). Choose edit HTML option.
And now look for this line in the code :-

<b:include data='blog' name='all-head-content'/>

Now add the following code to your html code, below the above lines :-
 <meta content='DESCRIPTION HERE' name='description'/>
<meta content='KEYWORDS HERE' name='keywords'/>
<meta content='AUTHOR NAME HERE' name='author'/>

<meta name="keywords" content= " humour, pictures, jokes, Template, Competition, tutorial, hacks, tips, tricks" />
<meta name="description" content="A cool Blog on college Life and Stuff, Quizzing, Dramatics, football and stuff" />
<meta name="author" content="Aaditya Chauhan" />
<meta name="ROBOTS" content="ALL" />
Replace the capitalised placeholders (DESCRIPTION HERE, KEYWORDS HERE, AUTHOR NAME HERE) with your own description, keywords and name, as in the filled-in example above.

That's it! You have successfully added the meta tags to your Blogger blog. You can check out your tags and description when you see your blog on Google.

Sunday, 3 July 2011

Switching from Gnome to KDE

I recently switched from Gnome to KDE for the sake of my project, and even though I was reluctant in the beginning, I now have to admit that it was totally worth it. KDE is strikingly different from Gnome, and I find it very similar to Microsoft Windows in some ways. To start with, I found KDM pretty boring, so maybe the recent talk in the KDE community of replacing KDM with LightDM is well placed. But the rest of the desktop environment was nice and refreshing. Here is a simple procedure to switch from Gnome to KDE.

Just follow the simple steps :-

(1). Go to System ---> Administration ---> Synaptic Package Manager.
(2). Search for kubuntu-desktop.

 (3). Select the package and install the package.

It is a not-so-huge 117 MB download, so just sit back and relax; it took 34 minutes to download on my system.

(4). Now it will start installing, which will again take some time.

Some time during the installation it will ask you whether you would like to keep GDM or switch to KDM; I would say you should stick with GDM. And you are through.

So enjoy using KDE, and don't forget to give your valuable feedback!


Sunday, 26 June 2011

JPEG Compression Algorithm

Who knew that the file format we use daily to store our images is not just a file format, but also a state-of-the-art compression technique? It is a common experience for every multimedia enthusiast that the same PNG image, when saved in the JPEG format, results in a smaller size, i.e. the image gets compressed. The degree of compression is adjustable, allowing a selectable trade-off between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. The term "JPEG" is an acronym for the Joint Photographic Experts Group, which created the standard; it is the most common format for storing and transmitting photographic images on the World Wide Web.

Image compression techniques are of two types :-
(1). Lossy
(2). Loss-less

It is the lossy technique that is preferred, as it gives better compression ratios for very little loss in clarity.
The compression is achieved by discarding less important data and focusing only on the important parameters, like :-

(1). Colour-space transformation :-
The image is converted from RGB into the Y′CbCr colour space: a luma channel (Y′) carrying the brightness, and two chroma channels (Cb and Cr) carrying the colour. This kind of colour space conversion enables greater compression without any perceptual change in image quality. The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of colour in the human visual system. The colour transformation also improves compression by statistical decorrelation.
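As a rough sketch of this step, here is a per-pixel RGB to Y′CbCr conversion in Python, assuming the ITU-R BT.601 full-range coefficients that JPEG commonly uses (a real encoder applies this across the whole image):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to the Y'CbCr space used by JPEG
    (ITU-R BT.601 full-range coefficients)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

# A pure grey pixel carries no colour: Cb and Cr sit at the 128 midpoint.
print(rgb_to_ycbcr(128, 128, 128))
```

Note how the three luma weights sum to 1, so brightness is preserved, while the chroma channels measure deviations from grey.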

The various steps involved in the conversion of an image into the JPEG format are :-

(2). Down-sampling
(3). Block-Splitting
After sub-sampling, each channel is split into 8×8 blocks.
(4). Discrete cosine transform
Next, each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-domain representation, using a normalised, two-dimensional type-II discrete cosine transform (DCT). Before computing the DCT of the 8×8 block, its values are shifted from a positive range to one centred around zero. For an 8-bit image, each entry in the original block falls in the range [0,255]. The mid-point of the range (in this case, the value 128) is subtracted from each entry to produce a data range that is centred around zero, so that the modified range is [ − 128,127]. This step reduces the dynamic range requirements in the DCT processing stage that follows. (Aside from the difference in dynamic range within the DCT stage, this step is mathematically equivalent to subtracting 1024 from the DC coefficient after performing the transform – which may be a better way to perform the operation on some architectures since it involves performing only one subtraction rather than 64 of them.)
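The level shift and DCT described above can be sketched in pure Python. A real encoder uses a fast factorised DCT; this direct form is only for illustration:

```python
import math

def dct2(block):
    """Normalised two-dimensional type-II DCT of an 8x8 block
    (direct O(N^4) evaluation, for illustration only)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

# Level-shift an 8-bit block into [-128, 127], then transform.
# A flat block (every pixel 178, shifted to 50) puts all of its energy
# into the DC coefficient; every AC coefficient comes out as zero.
pixels = [[178] * 8 for _ in range(8)]
shifted = [[p - 128 for p in row] for row in pixels]
coeffs = dct2(shifted)
print(round(coeffs[0][0]))  # 8 * 50 = 400
```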

(5). Quantisation
The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high frequency brightness variation. This allows one to greatly reduce the amount of information in the high frequency components. This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer. This rounding operation is the only lossy operation in the whole process if the DCT computation is performed with sufficiently high precision. As a result of this, it is typically the case that many of the higher frequency components are rounded to zero, and many of the rest become small positive or negative numbers, which take many fewer bits to represent.
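A minimal sketch of the quantisation step follows. The table below is made up for illustration (real JPEG files carry standard-derived tables scaled by the quality setting):

```python
def quantise(coeffs, qtable):
    """Divide each DCT coefficient by its quantisation step and round:
    the only lossy operation in the whole pipeline."""
    return [[round(coeffs[u][v] / qtable[u][v]) for v in range(8)]
            for u in range(8)]

# A made-up table: small steps for low frequencies (top-left),
# big steps for high frequencies (bottom-right).
qtable = [[1 + u + v for v in range(8)] for u in range(8)]

# A strong DC coefficient plus weak AC coefficients everywhere else.
coeffs = [[400 if (u, v) == (0, 0) else 3 for v in range(8)] for u in range(8)]

quantised = quantise(coeffs, qtable)
print(quantised[0][0], quantised[7][7])  # the DC survives; weak high frequencies round to 0
```

The zeros produced here are exactly what makes the following entropy step so effective.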

(6). Entropy encoding :-
This is the step where Huffman coding is applied to the quantised coefficients, so that frequently occurring values (above all, runs of zeros) take fewer bits.
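A small sketch of the idea behind this step, building Huffman code lengths with the standard-library heapq. Real JPEG uses fixed code tables plus run-length coding of zeros; this only shows why frequent symbols compress well:

```python
import heapq
from collections import Counter

def huffman_lengths(symbols):
    """Build Huffman code lengths for a symbol stream: frequent symbols
    (like the zero coefficients left by quantisation) get short codes."""
    freq = Counter(symbols)
    # Heap entries: (weight, tiebreak, {symbol: code_length_so_far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

lengths = huffman_lengths([0] * 50 + [5] * 3 + [7] * 2)
print(lengths[0] < lengths[5])  # the common symbol gets the shorter code
```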

sighhhh.......... you would never have thought that so many steps take place, just while saving an image.

To reconstruct the image from this mathematical data, the exact opposite transformation takes place: the encoded data passes through the decoder, the inverse discrete cosine transform is applied, and so on and so forth.


Thursday, 23 June 2011

Decoding GSOC

I applied for a position in this year's Google Summer of Code. Frankly, prior to this year's GSoC I had very little experience of open source software development, but I thought maybe I should give it a shot. GSoC is an annual event that takes place from April to August, in which students code for various open source organisations and Google pays them for it. If you complete your project, you get $5000, a certificate and Google goodies. The theme says: flip bits, not burgers. The whole event is aimed at getting more students familiar with open source.

I applied to AbiWord this year, the very first organisation that appears in the list of participating organisations. AbiWord is a free word processing program similar to Microsoft® Word, suitable for a wide variety of word processing tasks. What makes AbiWord special is its cross-platform nature: it runs on Windows, Linux and OS X too. Sadly I didn't get selected, but a Chinese friend of mine, ChenXiajian, a 26-year-old PhD student from the Chinese Academy of Sciences, did, and he will complete the project I applied for: hyphenation support in AbiWord. Meanwhile, I made a lot of good friends in the AbiWord community. Even though I didn't get selected, my friend Divyanshu Bandil did, in ASCEND, where he will write a parser and compiler for ASCEND in Python, due to its BSD licensing freedom. I requested him to share his valuable experience on how to get selected in GSoC, and even though he is very busy trying to meet deadlines, he did spare some time to share his account.

By Divyanshu Bandil

After getting my proposal selected for GSoC 2011, my friend asked me to share the experience on his blog. So here are some tips for future aspirants.

1. Ignore the hoopla - The first impression you get of the program is that you have to be extremely skilful, a 'supercoder', to get selected. You tend to think that the other guy who got selected is a magician at coding and carries all the algorithms at his fingertips. It is necessary to have some amount of experience in programming (hey! you already have that, or why else would you be applying?), and it helps to know basic data structures and some algorithms. But the fact is that GSoC is more about doing a project and learning on the way. So believe me, if you have the will to learn and work hard, then nothing can stop you from getting selected.

2. Get your act together - I hope I have got you motivated enough. First of all, ask yourself some basic questions. What type of organisation/software would you like to work for? Which programming language are you comfortable working with? Which platform would you be interested in developing for? Etc. You need not be too decisive, but it's good to have a general idea about the nature of the project you would be interested to work on.

3. Bond with the org - Based on the above answers, choose two, or at most three, organisations you would be interested in working for. Join their mailing lists and IRC. Chat with the members of the community. Discuss the project idea you would like to work on. Try to contribute to the code by solving a bug or maybe adding a small feature; this improves your chances of getting selected by helping you showcase your capability. The idea is to get to know the community and show that you will be able to work with them.

4. Writing the proposal - Once you have discussed the project and are confident about its implementation, the next step is to write your proposal. Most organisations provide a template for the proposal, but there are some things you should always mention. First of all, write a small abstract of the project. Provide some implementation details of the idea. Also provide a tentative schedule you will follow. Add some bits about your prior experience in programming; it's good to provide links to any projects you have worked on previously. Also, submit your proposal early, which leaves enough time to get it reviewed; subsequently, you will be able to remove any loopholes in your implementation and make your proposal better.

After submitting your final proposal, just keep your fingers crossed!

Thanks a ton, Divyanshu, for sharing your experience with others. I hope it will be very useful to future aspirants; in case you have a question, just fire away.


Wednesday, 22 June 2011

Understanding Systemd Part II

This post is from my understanding of the original post on systemd by Lennart Poettering :- 

At this point maybe we should look back and analyse how init systems have worked over the past so many generations, and draw a parallel with KDE's style of booting up.
SystemV init ----> Upstart ----> Systemd

SystemV's Approach :-

Finding the dependencies between services and starting them according to a topological sort is inherently serial. You could start daemons that don’t depend on one another at the same time, but if two daemons both depend on dbus, dbus must start first. This creates inevitable bottlenecks which slow down the boot up.
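That serial, dependency-ordered start can be sketched with Python's standard-library graphlib; the service names and dependencies below are illustrative:

```python
from graphlib import TopologicalSorter  # standard library since Python 3.9

# Hypothetical boot dependencies: each service maps to what must start first.
deps = {
    "dbus": [],
    "syslog": [],
    "avahi": ["dbus", "syslog"],
    "cups": ["avahi"],
}

# SysV-style boot: services start one at a time, in topological order,
# so cups has to wait for avahi, which has to wait for dbus and syslog.
order = list(TopologicalSorter(deps).static_order())
print(order)  # a valid serial order, e.g. ['dbus', 'syslog', 'avahi', 'cups']
```

Every service later in the list idles while its predecessors start, which is exactly the bottleneck described above.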

Upstart's Approach :-

“All jobs waiting on a particular event are normally started at the same time when that event occurs”. This means that all services that depend on dbus are started immediately after dbus has finished starting. Although this certainly improves boot times by a fair margin (Ubuntu boots very quickly nowadays), the bottleneck is still there.

Systemd/launchd's Approach :-

What happens is that for every daemon the resources get allocated right at the start (resources here means the sockets and the ports). While keeping the mapping from resources to daemons, it is pretended that everything has been started, while all the daemons are loaded simultaneously. If a daemon requires a socket that belongs to a daemon that has not finished starting yet, the requests are buffered and ordered by the kernel and the requesting daemon blocks, waiting for an answer. Finally, when a daemon finishes starting up, it gets all the requests buffered by the kernel without any fuss, as if it had just received them.

I am not too sure, but from the look of things it seems that KDE's model of start-up is pretty similar to either Upstart's or SystemV's approach, starting one process after another serially. Analysing my system's boot charts, it looks closer to Upstart's approach. Here the order of execution is :-

KDE Display Manager ----> startkde script ----> (1). kdeinit (starts various services)  ---->  klauncher + kded + kcminit
                                                                              (2). ksmserver

Image showing the system processes of my system, with the init script highlighted.
To achieve something close to systemd's model, the focus area should be the tie-up between the kdeinit script and its subsequent sub-processes, and how we can parallelise the process calls instead of the current serial inter-connections between the processes. Apparently this approach should be highly feasible for an environment like KDE/Gnome. But editing and implementing all these scripts is still going to be a Herculean task.

Friday, 17 June 2011

Understanding Systemd Part I

This post is from my understanding of the original post on systemd by Lennart Poettering :- 

From the look of things systemd just looks like a fancy init system.

Maybe at this point I should get more familiar with the init system, to understand more about systemd. init has PID 1, so obviously it gets started by the kernel before all other processes, and it is the parent of all other processes. Apart from this very important function, init performs the central task of bringing up and maintaining user-space during boot. systemd aims to be much faster than its venerable predecessor sysvinit. For a faster boot, two main things are required.
(1). Start less ---> Starting less means starting fewer services, or deferring the start of services until they are actually needed. For example, a printing service may not be required immediately after the system comes up, whereas there are some services that we know will be required sooner or later (syslog, the D-Bus system bus, etc.). Many services need not be started until they are directly called by the user, or their API is required by some other service.
(2). Start more in parallel ---> Starting more in parallel means that if we have to run something, we should not serialise its start-up (as sysvinit does), but run it all at the same time, so that the available CPU and disk IO bandwidth is maxed out, and hence the overall start-up time minimised.
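A toy demonstration of the pay-off, using threads and sleeps as stand-ins for real services:

```python
import threading
import time

def start_service(name, secs):
    time.sleep(secs)  # stand-in for a service's real start-up work

services = [("syslog", 0.1), ("dbus", 0.1), ("hal", 0.1)]

# Serial start-up (sysvinit style): the total is the sum of all services.
t0 = time.monotonic()
for name, secs in services:
    start_service(name, secs)
serial = time.monotonic() - t0

# Parallel start-up: the total approaches the slowest single service.
t0 = time.monotonic()
threads = [threading.Thread(target=start_service, args=s) for s in services]
for t in threads:
    t.start()
for t in threads:
    t.join()
parallel = time.monotonic() - t0

print(f"serial {serial:.2f}s, parallel {parallel:.2f}s")
```

The serial run takes roughly the sum of the three sleeps; the parallel run takes roughly the longest one.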

Lets see how programs start, with an example :-

--------------X--------------------------------------------------------- DBus service
---------------|------------------------------------------------- --X--- HAL service
--------------X---------------------------------------------------X---  syslog
               |                                                    |
               |                                                    |
Avahi (dependent on D-Bus and syslog) ----------> libvirtd (dependent on HAL and syslog, and also on Avahi)

So for libvirtd to start, it would have to wait for HAL and syslog to start, and additionally wait for Avahi to start too; hence libvirtd can't start until all these services have started, and hence the incumbent delay.

Parallelizing Socket Services :-
In order to get rid of these synchronising and parallelising delays, we have to understand what is required by one process from another. Usually that is an AF_UNIX socket in the file-system, but it could be AF_INET[6] too. For example, clients of D-Bus wait until /var/run/dbus/system_bus_socket can be connected to, clients of syslog wait for /dev/log, clients of CUPS wait for /var/run/cups/cups.sock, and NFS mounts wait for /var/run/rpcbind.sock and the portmapper IP port, and so on. Now if we can make these sockets appear before the daemons themselves are executed, and link them with the services, we can significantly reduce boot time and start more processes in parallel. We can create the listening sockets before we actually start the daemons, and then just pass each socket during exec(). That way, we can create all sockets for all daemons in one step in the init system, and then in a second step run all daemons at once. If a service needs another, and that one is not fully started up, that's completely OK: the connection is queued in the providing service and the client potentially blocks on that single request. But only that one client blocks, and only on that one request. Also, dependencies between services no longer necessarily have to be configured to allow proper parallelised start-up: if we start all sockets at once, and a service needs another, it can be sure that it can connect to its socket.
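The core trick, creating the listening socket before its daemon is ready, can be demonstrated in a few lines of Python. This is a toy local-TCP version; systemd actually passes AF_UNIX (and other) sockets across exec():

```python
import socket
import threading
import time

# The init system creates the listening socket first, before the
# "daemon" behind it is ready to accept connections.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # any free port
srv.listen(5)
addr = srv.getsockname()

def slow_daemon():
    time.sleep(0.2)          # the daemon is still "starting up"
    conn, _ = srv.accept()   # the queued connection is already waiting
    conn.sendall(b"ready")
    conn.close()

t = threading.Thread(target=slow_daemon)
t.start()

# The client connects immediately: the kernel queues the connection in
# the listen backlog until the daemon finally calls accept().
cli = socket.create_connection(addr)
data = cli.recv(5)
print(data)                  # b'ready'
cli.close()
t.join()
srv.close()
```

The client's connect() succeeds straight away even though nobody has called accept() yet; it only blocks later, on the one request that actually needs the daemon's answer.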

## not clearly understood this part ##

Basically, the kernel socket buffers help us to maximise parallelization, and the ordering and synchronisation is done by the kernel, without any further management from userspace! And if all the sockets are available before the daemons actually start-up, dependency management also becomes redundant (or at least secondary): if a daemon needs another daemon, it will just connect to it. If the other daemon is already started, this will immediately succeed. If it isn't started but in the process of being started, the first daemon will not even have to wait for it, unless it issues a synchronous request. And even if the other daemon is not running at all, it can be auto-spawned. From the first daemon's perspective there is no difference, hence dependency management becomes mostly unnecessary or at least secondary, and all of this in optimal parallelization and optionally with on-demand loading. On top of this, this is also more robust, because the sockets stay available regardless whether the actual daemons might temporarily become unavailable (maybe due to crashing). In fact, you can easily write a daemon with this that can run, and exit (or crash), and run again and exit again (and so on), and all of that without the clients noticing or losing any request.

Parallelizing Bus Services
Modern daemons on Linux tend to provide services via D-Bus instead of plain AF_UNIX sockets. Now, the question is, for those services, can we apply the same parallelizing boot logic as for traditional socket services? Yes, we can, D-Bus already has all the right hooks for it: using bus activation a service can be started the first time it is accessed. Bus activation also gives us the minimal per-request synchronisation we need for starting up the providers and the consumers of D-Bus services at the same time: if we want to start Avahi at the same time as CUPS (side note: CUPS uses Avahi to browse for mDNS/DNS-SD printers), then we can simply run them at the same time, and if CUPS is quicker than Avahi via the bus activation logic we can get D-Bus to queue the request until Avahi manages to establish its service name.

Apart from these services, filesystem jobs also have to be parallelised, but I am not going to read much into that, as the main focus of my project should lie in the bus and socket services.

Thursday, 16 June 2011

Understanding the KDE Launch Sequence

It's time to begin some serious work; I am already behind schedule. I will have to choose an area to work on and then move on to doing some instrumentation/benchmarking to see where the most time is spent and where the efforts have to be focussed. I have decided to take a look at the complete launch sequence, right from KDM to the Plasma shell, with the Akonadi application. My modest assumption at this stage is that the bulk of the time for all applications is spent on the hand-offs from one process to another (startkde/kdeinit/kded etc.) and their tie-up together; maybe we could unify this a bit more. Maybe also add some timing instrumentation/debug output to see where the time is spent.

Having a more detailed look on the entire start-up sequence :-

KDE Display Manager 
   startkde script (executes 2 main functions) ----> 
(1). kdeinit (starts various services)
   dcopserver + klauncher + kded + kcminit
(2). ksmserver

Process 1:-
The user gets authenticated at start-up by entering the user-name and password at the KDE Display Manager (KDM). KDM then executes the startkde script; this script performs two vital functions and calls the following scripts :-

LD_BIND_NOW=true kdeinit +kcminit +knotify
Starts the kdeinit master process, which in turn starts all other KDE processes. The arguments after kdeinit are the names of additional services to be started. The + indicates that kdeinit needs to wait until the service has finished starting.
                ## Haven't clearly understood the script ##

kwrapper ksmserver $KDEWM 

Starts KDE's session manager. On startup the session manager starts the auto-start applications and restores applications from the previous session. The session manager determines the lifetime of the session; when this process exits, the user is logged out.

Process 2:-
kdeinit master process begins after the startkde script gets executed. kdeinit can start normal binary program files as well as kdeinit loadable modules (KLMs). KLMs work just like binary program files but can be started more efficiently.

For example:
> ps aux
waba     23184  0.2  2.1 23428 11124 ?       S    21:41   0:00 kdeinit: Running...
waba     23187  0.1  2.1 23200 11124 ?       S    21:41   0:00 kdeinit: dcopserver --nosid
waba     23189  0.2  2.4 25136 12496 ?       S    21:41   0:00 kdeinit: klauncher
waba     23192  0.7  2.8 25596 14772 ?       S    21:41   0:00 kdeinit: kded
waba     23203  0.8  3.4 31516 17892 ?       S    21:41   0:00 kdeinit: knotify

The kdeinit: Running... entry in the first line is the master kdeinit process. The other processes listed are programs started as KLMs.
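The efficiency gain can be illustrated with a toy fork-based launcher: the master process loads code once and forks children that reuse it, instead of exec'ing a fresh binary per service. This is only an analogy for how KLMs avoid per-process start-up cost, not kdeinit's real mechanism.

```python
# Toy KLM-style launcher: the master imports a module once, then forks
# children that already have it loaded (no per-child import/link cost).
# Unix-only, since it relies on os.fork().
import json  # stands in for a large shared library loaded by the master
import os
import sys

def run_klm(name):
    """Fork a child that reuses the master's loaded modules; return its exit code."""
    pid = os.fork()
    if pid == 0:
        # Child: json is already available, nothing to re-load.
        sys.stdout.write(f"{name}: {json.dumps({'klm': name})}\n")
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

for service in ("klauncher", "kded"):
    run_klm(service)
```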

Process 3:-
BackGround/Subprocesses :-

1. dcopserver 
# confusion? - is this any different from the D-Bus daemon that starts when the system boots? #
The docs say "dcopserver is a daemon which provides inter-process communication (DCOP) facilities to all KDE applications." That is pretty similar to dbus-daemon, which provides inter-process communication (IPC) facilities to all KDE applications as well as several other system components, such as HAL, NetworkManager, the power manager and various non-KDE desktop applications.

2. klauncher
klauncher is a daemon responsible for service activation within KDE. It operates in close connection with the kdeinit master process to start new processes. KDE applications communicate with klauncher over DCOP in order to start new applications or services.

3. kded
kded is a generic KDE daemon. It has the ability to load various service modules and run these in the background. The Service Manager in the Control Center can be used to monitor the status of the service modules and to disable certain services.

4. kcminit
kcminit executes initialisation services during startup. Initialisation services are specified in the .desktop files of applications or services via the X-KDE-Init line. They are typically used for initialising hardware based on user-specified settings. kcminit --list can be used to show all initialisation services, and running kcminit with a service name executes that single service explicitly. This can be useful when investigating start-up problems.
## More Understanding required ##
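To make the X-KDE-Init mechanism concrete, here is a sketch of how such entries could be read out of a .desktop file. The file contents below are a made-up example, not a real KDE service file.

```python
# Reading an X-KDE-Init entry from a .desktop file. .desktop files use an
# INI-like syntax, so configparser is close enough for a sketch.
import configparser
import io

DESKTOP_FILE = """\
[Desktop Entry]
Name=Mouse
Exec=kcmshell mouse
X-KDE-Init=mouse
"""

def init_service(desktop_text):
    """Return the X-KDE-Init value, or None if the entry has none."""
    parser = configparser.ConfigParser()
    parser.read_file(io.StringIO(desktop_text))
    return parser["Desktop Entry"].get("X-KDE-Init")

print(init_service(DESKTOP_FILE))  # prints: mouse
```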

Process 4:-
ksmserver is KDE's session manager. On startup the session manager starts auto-start applications and it restores applications from the previous session.

Whether to auto-start an application can be conditional upon some configuration entry, determined by the X-KDE-autostart-condition entry in the .desktop file. The KDE session manager also restores one of the previous sessions. A session consists of a collection of applications as well as application-specific information that reflects the state of those applications at the time the session was saved. Sessions are stored in the ksmserverrc configuration file, which contains references to the application-specific state information. The application-specific state information is saved in $KDEHOME/share/config/session.

For example, if ksmserverrc lists kwin and konsole as part of the restored session, then their application-specific state information can be found in $KDEHOME/share/config/session. The state information of kwin contains the location of the application windows of all the other applications in the session.
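Since I don't have the exact file contents here, the snippet below sketches the idea with an invented ksmserverrc-style layout; the real key names and sections may differ.

```python
# Parsing a hypothetical ksmserverrc-style session list to decide which
# applications to restart. Key names here are invented for illustration.
import configparser
import io

KSMSERVERRC = """\
[Session]
count=2
program1=kwin
program2=konsole
"""

def restored_programs(text):
    """Return the programs the session manager would restart, in order."""
    cfg = configparser.ConfigParser()
    cfg.read_file(io.StringIO(text))
    session = cfg["Session"]
    return [session[f"program{i}"] for i in range(1, int(session["count"]) + 1)]

print(restored_programs(KSMSERVERRC))  # ['kwin', 'konsole']
```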

At this point the order of execution with respect to ksmserver is not very well understood by me: what happens immediately after the startkde script? If ksmserver gets executed alongside kdeinit, then the values returned by ksmserver would have to be communicated in some way to klauncher and kded.

Sunday, 29 May 2011

Season Of KDE

Having not got selected for this year's Google Summer of Code, I applied for Season of KDE. Season of KDE (SoK) was set up in 2006 to provide some of the benefits of Google Summer of Code to those students whose projects did not get selected. Season of KDE provides students with experienced mentors and a well-defined project, just like Google Summer of Code, but it does not pay students. It doesn't provide the same benefits as Google Summer of Code, but it offers valuable mentoring along with a vibrant and friendly environment; it's more about passion and, of course, a cool T-shirt. SoK allows us to work on a cutting-edge project, and helps us take our first steps in developing KDE software and becoming worthy members of the KDE community. Each SoK student works on their chosen project with a mentor from KDE with experience in that area to help and guide them.

The application procedure was very straightforward. Soon after getting the "sorry, we couldn't select you" mail from Google, I received a mail from a small "Google Summer of Disappointment" community and came to know about Season of KDE. Even students have to eat, and so SoK participants often have other jobs and can only work on their projects part time. As a result, SoK projects may have smaller scope than Google Summer of Code projects or happen over a longer period. KDE benefits from new additions to our software and our community, and students get a SoK T-shirt, a certificate, some Google goodies and a great experience. SoK can also be a springboard to future Google Summer of Code success, with several past SoK participants going on to secure Google Summer of Code acceptance. Equally, SoK has provided opportunities for students to continue a Google Summer of Code project from previous years.

I filled out the initial details on Lydia's blog and received a mail the next day about my project details; I had earlier very vaguely suggested a project on KDE speed optimisation.

Project: Speed optimisation of KDE start-up time.

I was bombarded with a lot of well-directed suggestions:

: One possibility would be to help the Platform modularisation effort to cut down dependencies.

: Another could be to reduce the number of dependencies that are loaded during startup; that is, convert some of the static libraries and see if we can load them dynamically. That may have a slight run-time performance cost, but it would greatly reduce start-up time.

And finally, I hope to implement the project as Tom Gundersen suggested:

: Have a look at how systemd has improved startup speeds of system services, and try to do something similar for KDE start-up. The idea is to allow socket/D-Bus activation to synchronise daemon startup, rather than explicitly starting one daemon only after another is up.

Ideally something similar should be possible (and much easier) for KDE apps/daemons, as most desktop things are already able to do D-Bus activation. The idea would be to start all apps/daemons/services simultaneously as soon as the D-Bus session bus is running; if one app needs a service, it will block in the call to D-Bus waiting for it to start. As much as possible, everything should be in the first autostart phase.

: The way to go here would be to use bootchart and a profiler to find out where most of the time is spent, and then make that code faster. systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelisation capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic.
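The core trick behind socket activation can be shown in a few lines: create the listening socket before the service is ready, and let clients connect immediately; the kernel queues the connection until the service accepts it. A toy demonstration of the concept only, not of systemd's implementation:

```python
# Socket-activation sketch: the socket exists before the "service" runs,
# so a client can connect early; the kernel holds the connection in the
# listen backlog until the service accepts it.
import socket
import threading

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # the "activator" creates the socket first
listener.listen(5)
port = listener.getsockname()[1]

result = {}

def client():
    # Connects before the service has accepted anything; blocks in recv
    # until the service comes up and answers.
    with socket.create_connection(("127.0.0.1", port)) as s:
        result["msg"] = s.recv(16).decode()

t = threading.Thread(target=client)
t.start()

# The "service" starts later and handles the already-queued connection.
conn, _ = listener.accept()
conn.sendall(b"ready")
conn.close()
t.join()
listener.close()
print("client got:", result["msg"])  # client got: ready
```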

It looks like this is going to be a great summer; I am very excited, and I hope my project gets selected.


Friday, 13 May 2011

FireFox Tricks, Linux FunFacts & some interesting Doodles

I haven't posted for a while now, for a variety of reasons ranging from frustrating Blogger maintenance issues to depressing exams, practicals and vivas. So I am back again, presenting a few Firefox tricks that I came across while googling. They look pretty cool, so just try them, and don't forget to comment with your favourite Firefox trick.

Type the following addresses in the address bar and see the results:

It shows a dancing Firefox.

It opens another Firefox inside a tab in the existing Firefox window.

It opens the Options dialog box inside the Firefox tab.

It opens the "Bookmarks Manager" inside a tab in the Firefox window.

It opens the History panel in the Firefox tab.

It opens the Extensions window in the current tab.

It opens the "Cookies" window inside a tab in the Firefox window.

It opens the "Clear Private Data" window inside the current tab.

It opens the "About Firefox" dialog box inside the tab.

A scrolling list of names: the ones we must thank for creating Firefox.

Linux fun facts:
(1). Linus' (the writer of the Linux kernel) favourite programming editor is Microsoft Notepad.

(2). Linux® is commonly confused with LINUXOS, released by Linux Incorporated.

(3). Steve Ballmer (Microsoft) has called Linux a cancer on mankind.

(4). Linux® is a compound abbreviation for the full name: Linu Christ. In fact, Linu Christ is believed to have run the Church of UNIX. Critics such as Pat Robertson contend that the X crosses out Christ.

(5). The bit-system was originally invented to make Linux®'s kernel versions countable. Right now there's 64-bit, since there are 2^64 different kernels. And the saga isn't even close to coming to an end.

(6). The original kernel source was written down in four notebooks.

A few interesting Google Doodles I came across, from Google India and Google worldwide, with the last one being Google's first-ever Doodle.

Saturday, 23 April 2011

Hack Your Mozilla FireFox and Browser Wars Reloaded.

Mozilla Firefox is my preferred open-source browser, so here I present a minor tweak which can reportedly make Firefox perform up to 40% faster for page transfers. With just a few clicks and some typing, you can experience faster browsing and surfing in Firefox.
Here's how it's done:

1. Open Firefox and in the address bar, type 'about:config'.

2. Click on the button: 'I'll be careful, I promise!'.

3. Use the search bar located on the page to look for 'network.http.pipelining' and double click on it to set its value to 'True'.

4. Create a new Boolean value named 'network.http.pipelining.firstrequest' and set that to 'True' as well.

5. Find 'network.http.pipelining.maxrequests', double-click it, and change its value to 8.

6. Look for 'network.http.proxy.pipelining' and set it to 'True'.

7. Create two new integers named 'nglayout.initialpaint.delay' and 'content.notify.interval' ; set them to '0'.

8. Restart your browser, and thank me for the difference.
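For anyone who prefers editing a file over clicking through about:config, the same preferences can, as far as I know, be set in a user.js file in the Firefox profile directory. A sketch of the equivalent entries, assuming the preference names in the steps above are correct:

```js
// user.js (in the Firefox profile directory) - mirrors the steps above.
// Preference names are taken from the about:config steps; verify them there.
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.firstrequest", true);  // created in step 4
user_pref("network.http.pipelining.maxrequests", 8);
user_pref("network.http.proxy.pipelining", true);
user_pref("nglayout.initialpaint.delay", 0);              // created in step 7
user_pref("content.notify.interval", 0);                  // created in step 7
```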

Browser Wars gets Ugly :-
Microsoft says IE9 is "the world's fastest browser", but Firefox developer Mozilla claims IE9 doesn't even qualify as modern. As Mozilla's Firefox 4 and Internet Explorer 9 move closer to release, the browser makers are sparring over each other's HTML5 capabilities, lobbing insults and contradictory test results. After Microsoft claimed IE9 achieves 99% compatibility with HTML5, compared to Firefox's 50%, Mozilla technology "evangelist" Paul Rouget fired back with a blog post titled "Is IE9 a modern browser? NO."

To present a clearer picture of the browser wars, here are the statistics of the people who have visited my blog; you can view them and come to your own conclusion.

As you can clearly see, Google Chrome is the leader in the browser wars, followed closely by Mozilla Firefox, with Internet Explorer a distant third.
So the winner in this browser war is: GOOGLE CHROME, followed closely by: MOZILLA FIREFOX.

Wednesday, 20 April 2011


When it comes to choosing a Web browser today, we're spoiled for choice. Major new releases Internet Explorer 9 and Firefox 4 have brought these two big-name browsers to near parity with upstart Google Chrome, which, though a relatively new entrant into the browser market, has taken the browser industry by storm.
The current crop of surfing software all includes plenty of speed, minimised interfaces for a better look at the site you're browsing, and support for the emerging HTML5 standard markup language. Each brings a unique twist, though. The new browser from Microsoft, Internet Explorer 9, adds hardware acceleration for graphics-intensive sites and arguably the best privacy tool to prevent tracking of your Web activities by marketing sites. Firefox offers a Panorama view of your tabs and a refreshed version of what's still the most powerful set of customisations, along with the ability to sync bookmarks, history, settings, and more.

Google Chrome 10:
Chrome Instant means your Web page is ready to read before you finish typing the address. This, its speed, minimalist design, and advanced support for HTML5 have deservedly been attracting more and more users to the browser. The latest version adds an improved settings interface, and even more speed and security.

Internet Explorer 9:
Microsoft's new browser is faster, trimmer, and more compliant with HTML5, a major improvement over its predecessor. It also brings some unique capabilities like tab-pinning and hardware acceleration, but only Windows 7 and Vista users need apply.

Mozilla Firefox 4:
Firefox 4 gets Mozilla back into the game. This lean, fast, customisable browser can hold its own against any competitor, and it offers graphics hardware acceleration. It is my favourite open-source browser, with some cool features like private browsing and the ChatZilla add-on, which lets you connect to an IRC network.

• The Firefox project has changed its name several times. It was renamed from "Phoenix" to "Firebird" because of trademark issues with Phoenix Technologies. Mozilla Firebird then became Mozilla Firefox on February 9, 2004.
• There is an ongoing belief that "Mozilla" and "Firefox" are the same. As we discussed a while back, Firefox adopted its name in 2004. Mozilla, on the other hand, refers to the company, Mozilla Corporation, which developed both Firefox and the Mozilla Suite. Development of the Mozilla Suite ended in 2005, and it is now known as SeaMonkey.
• Although open source, Mozilla Firefox is not entirely free, as some would presume; some of its elements are covered by an EULA (End-User License Agreement).

Now it's time for some Internet Explorer jokes (the punching bag of the browser wars).


So it seems that the open-source browsers Chrome and Firefox have beaten Internet Explorer black and blue. Finishing on a lighter note, I came across a funny T-shirt, which said:

and all I could do was just smile :-)