Benchmarking Browsers by Page Load Times

May 06 2010

Recent comparisons of browsers focus on JavaScript speed, but there are many other ways to measure browser performance: image load time, reloading from cache, start-up time, and CSS rendering speed. Opera led the pack a few years ago, and it is now being branded as the fastest browser again.

[image: snapshot2]
You should notice the ad on the left for Chrome. So which one is the fastest? I ran the tests on Linux and Windows.

Benchmark Method

There’s nothing special about using JavaScript to detect when the browser fires onload, as long as the browser follows the convention of firing it after the page has loaded. Browsers have not always done so; at one point that alone was enough reason for me to stick with Firefox and ignore the others. According to this article, it was fixed two years ago. So I took the benchmark from the site and ran it. Not surprisingly, Firefox came out with the highest score. Note that higher is worse here: the score measures load time, and ideally a page takes no time to load.

Linux

Google Chrome for Linux behaves differently from the Windows version: the benchmark would load the first site once, but could neither time it nor reload it. Opera got stuck on MySpace once and kept adding more elements to the page, possibly due to an ad that is not normally loaded locally. I ran these tests on Sabayon Linux with kernel 2.6.30 (I expect 2.6.33 to be faster, since it has been patched with Con Kolivas’s kernel enhancements). An interesting note here, not seen elsewhere in the result set: Arora took longer to load pages the first time, but was faster on all subsequent reloads. Arora’s total score comes second to Opera’s. On another note, Firefox, with the same extensions, ran faster on Linux than on Windows.

Firefox 3.6.3 Arora 0.10.2
Beginning Benchmark Beginning Benchmark
baidu.com.htm baidu.com.htm
692 1461
356 47
348 37
338 49
352 25
345 26
1046 26
356 27
344 28
356 27
Site Average: 453.3 Site Average: 175.3
blogger.com.htm blogger.com.htm
445 2408
285 202
283 205
269 206
272 197
283 197
262 200
271 190
262 194
259 193
Site Average: 289.1 Site Average: 419.2
facebook.com.htm facebook.com.htm
472 515
450 315
447 305
628 304
463 309
476 314
469 319
579 316
459 318
453 319
Site Average: 489.6 Site Average: 333.4
google.com.htm google.com.htm
123 84
109 21
96 20
108 21
102 21
106 21
94 22
91 21
94 22
94 21
Site Average: 101.7 Site Average: 27.4
havenworks.com.htm havenworks.com.htm
3639 4103
2587 217
2793 202
2598 202
2635 203
2614 203
2612 203
2594 204
2585 207
2572 216
Site Average: 2722.9 Site Average: 596
live.com.htm live.com.htm
305 413
153 101
154 79
157 82
152 74
159 84
160 74
166 79
143 76
149 89
Site Average: 169.8 Site Average: 115.1
myspace.com.tom.htm myspace.com.tom.htm
1429 1965
1230 759
1233 778
1235 764
1243 818
1229 753
1227 766
1275 775
1269 774
1224 769
Site Average: 1259.4 Site Average: 892.1
reddit.com.htm reddit.com.htm
604 586
557 399
542 397
541 392
523 404
513 393
766 400
521 385
533 386
525 382
Site Average: 562.5 Site Average: 412.4
wikipedia.org.htm wikipedia.org.htm
670 4110
242 36
232 49
231 34
470 34
232 31
236 30
229 32
227 31
239 49
Site Average: 300.8 Site Average: 443.6
Benchmark Complete
Score 705.455555555556 379.388888888889
First Page Load Average 931 1738.33333333333
Website http://gentoo-portage.com/www-client/mozilla-firefox http://gentoo-portage.com/www-client/arora
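The two summary lines can be reproduced from the table above: the Score is the mean of the nine site averages, and the First Page Load Average is the mean of each site's first timing. A quick sketch using the Firefox-on-Linux numbers:

```python
# Firefox 3.6.3 (Linux) timings from the table above, in milliseconds
runs = {
    'baidu':      [692, 356, 348, 338, 352, 345, 1046, 356, 344, 356],
    'blogger':    [445, 285, 283, 269, 272, 283, 262, 271, 262, 259],
    'facebook':   [472, 450, 447, 628, 463, 476, 469, 579, 459, 453],
    'google':     [123, 109, 96, 108, 102, 106, 94, 91, 94, 94],
    'havenworks': [3639, 2587, 2793, 2598, 2635, 2614, 2612, 2594, 2585, 2572],
    'live':       [305, 153, 154, 157, 152, 159, 160, 166, 143, 149],
    'myspace':    [1429, 1230, 1233, 1235, 1243, 1229, 1227, 1275, 1269, 1224],
    'reddit':     [604, 557, 542, 541, 523, 513, 766, 521, 533, 525],
    'wikipedia':  [670, 242, 232, 231, 470, 232, 236, 229, 227, 239],
}

# Score: mean of the per-site averages (first load included in each average)
site_averages = [sum(t) / len(t) for t in runs.values()]
score = sum(site_averages) / len(site_averages)

# First Page Load Average: mean of each site's first timing
first_load_average = sum(t[0] for t in runs.values()) / len(runs)

print(round(score, 2))              # 705.46
print(round(first_load_average, 1)) # 931.0
```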

Windows

Not surprisingly, the winners on Windows were 32-bit browsers. Aside from the small speed increase from smaller pointer sizes in 32-bit applications, I think Opera and Chrome are simply faster browsers, as they advertise. A surprising result is that 64-bit IE ran faster than 64-bit Firefox. I noticed that a while ago, but decided to stick with Firefox because of its add-ons. Iron is a stripped-down version of Chrome compiled from source; it should be slightly faster, with a slimmer binary and no user-tracking features. I ran this on Windows 7 Pro 64-bit, Version 6.1 (Build 7600).

Firefox 3.6.3 Opera 10.52 Iron 4.0.280 Internet Explorer
Beginning Benchmark Beginning Benchmark Beginning Benchmark Beginning Benchmark
baidu.com.htm baidu.com.htm baidu.com.htm baidu.com.htm
863 720 783 835
409 12 9 42
402 12 7 44
407 11 8 44
385 12 9 38
391 11 8 42
407 12 9 38
403 11 8 41
401 12 9 36
405 11 11 42
Site Average: 447.3 Site Average: 82.4 Site Average: 86.1 Site Average: 120.2
blogger.com.htm blogger.com.htm blogger.com.htm blogger.com.htm
398 220 333 220
143 72 38 90
140 69 38 85
138 73 38 85
140 70 41 86
140 72 37 75
150 70 42 76
140 72 36 72
140 70 41 73
138 72 37 85
Site Average: 166.7 Site Average: 86 Site Average: 68.1 Site Average: 94.7
facebook.com.htm facebook.com.htm facebook.com.htm facebook.com.htm
476 267 273 356
397 289 128 275
368 224 133 282
489 222 129 283
359 223 127 280
356 222 127 287
353 223 125 280
453 232 126 281
363 227 125 288
352 222 128 277
Site Average: 396.6 Site Average: 235.1 Site Average: 142.1 Site Average: 288.9
google.com.htm google.com.htm google.com.htm google.com.htm
135 40 21 70
70 13 11 52
70 14 11 37
70 13 10 46
70 14 10 37
70 14 11 48
70 14 11 43
70 13 11 34
70 14 11 36
70 14 11 50
Site Average: 76.5 Site Average: 16.3 Site Average: 11.8 Site Average: 45.3
havenworks.com.htm havenworks.com.htm havenworks.com.htm havenworks.com.htm
4024 867 736 2325
2863 655 279 2232
2873 659 278 2231
2866 665 280 2259
2852 700 282 2291
2855 663 285 2480
2880 650 276 2297
2930 668 280 2251
2866 666 277 2225
2860 654 279 2233
Site Average: 2986.9 Site Average: 684.7 Site Average: 325.2 Site Average: 2282.4
live.com.htm live.com.htm live.com.htm live.com.htm
254 280 235 125
95 75 38 132
95 78 42 131
107 75 38 125
98 77 39 137
96 77 40 125
96 74 40 104
96 74 40 99
96 74 39 115
95 76 38 103
Site Average: 112.8 Site Average: 96 Site Average: 58.9 Site Average: 119.6
myspace.com.tom.htm myspace.com.tom.htm myspace.com.tom.htm myspace.com.tom.htm
1253 1489 1573 2032
927 1377 3451 1287
928 1541 1131 4756
1535 3978 1902 1749
958 1111 1710 1436
941 1080 1622 1152
928 1103 4948 1918
924 1081 2927 1222
952 1388 1713 1230
930 1049 3505 1260
Site Average: 1027.6 Site Average: 1519.7 Site Average: 2448.2 Site Average: 1804.2
reddit.com.htm reddit.com.htm reddit.com.htm reddit.com.htm
552 246 259 541
425 166 167 463
424 163 165 483
418 164 167 478
608 164 166 478
429 165 166 480
424 165 166 485
422 165 166 481
432 164 167 495
425 164 167 483
Site Average: 455.9 Site Average: 172.6 Site Average: 175.6 Site Average: 486.7
wikipedia.org.htm wikipedia.org.htm wikipedia.org.htm wikipedia.org.htm
1105 726 955 967
166 78 34 260
164 66 33 255
163 66 34 262
163 66 32 253
165 67 32 260
163 71 35 258
164 72 36 256
162 71 33 267
163 71 34 256
Site Average: 257.8 Site Average: 135.4 Site Average: 125.8 Site Average: 329.4
Benchmark Complete
Score 658.677777777778 336.466666666667 382.422222222222 619.044444444444
First Page Load Average 1006.66666666667 539.444444444445 574.222222222222 830.111111111111
Website www.mozilla-x86-64.com/ http://www.opera.com/ http://www.srware.net/en/software_srware_iron.php

Conclusion

Opera beat all the other browsers with just its default settings; a little more tuning of the redraw rate and memory use could improve its score. I expect real-world browsing results to deviate from these numbers: Chrome and Firefox have DNS prefetching, and Firefox and Opera have pipelining. To improve DNS lookup speed in Opera, you can set your system to use OpenDNS to resolve domain names.


After the Benchmark (you should decide which browser to use)

I measured the memory use!

[image: mem]

It looks like Chrome and IE were not designed for really cheap laptops. (They can’t even run on old computers with Windows 2000.)


Minimal SearchJump on Google Chrome

May 02 2010

I noticed that the icons for search jump had different sizes when I loaded it in Google Chrome.

[image: screenshot.28]

When I investigated the issue, I found the icons had 16×16, 32×32, 64×64 image sizes embedded. Chrome happened to load the larger image sizes.

[image: screenshot.32]

The final trick was to get Chrome to reload it. I tried restarting the browser and clearing the cache. It finally worked when I uninstalled it from the extensions menu and reinstalled it from the web site.

[image: screenshot.35]

Minimal SearchJump is available for Chrome, Firefox, and Opera.
[image: screenshot.34]


Coursetree Prerequisites

Apr 05 2010

While working on the tree part of the coursetree project, I ran into the question of how to display the course dependencies. I have written a recursive function that returns a recursive list of course prerequisites:

[u'SE 112', [u'MATH 135', []]]

I wanted to turn it into a diagram of course prerequisites and decided that two flat lists could retain the data:

  • a list for the levels
  • a list for connections between courses

For example,

[image: tree]
would translate to

[[1],[2,6],[3,5,7],[4]]

and

[(1,2),(2,3),(2,5),(3,4),(1,6),(6,7)]

Separating the Levels

We start off with a recursive list representing the tree structure:

[1,[2,6,[3,5,[4,[]],[]],[7,[]]]]

To tackle the problem, I first solved a similar one: flattening a list.
This can easily be done in Scheme, as syntax and type declarations do not get in the way.

(define (flatten sequence)
  (cond ((null? sequence) '()) ; simplest case: ()
        ((list? (car sequence)) ; a list in front: ((1) ...)
         (append (flatten (car sequence))
                 (flatten (cdr sequence))))
        (else (cons (car sequence) ; an atom in front: (1 ...)
                    (flatten (cdr sequence))))))

> (flatten '(1 (2 3)))
(list 1 2 3)

Here’s the same code translated into Python:

car = lambda lst: lst[0]
cdr = lambda lst: lst[1:]
def flatten(seq):
    if not seq:
        return list()
    elif isinstance(car(seq), list):
        return flatten(car(seq)).extend(flatten(cdr(seq)))
    else:
        return [car(seq), flatten(cdr(seq))]

Unfortunately, this flatten in Python produces a hypernested structure for flat lists, since (1 2 3) in Scheme is equivalent to (cons 1 (cons 2 (cons 3 empty))), i.e. (1 . (2 . (3 . ()))).

>>> flatten([1,2,3])
[1, [2, [3, []]]]

The more serious problem is that the code trips over a quirk of Python:

>>> [1].extend([])
>>>

The expression evaluates to None: extend modifies the list in place and returns nothing, so the extended list seems to disappear . . . one of those unpleasant surprises.
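To be precise, extend always mutates the list in place and returns None; the empty list is not special:

```python
x = [1]
result = x.extend([2, 3])

print(result)  # None -- extend returns nothing
print(x)       # [1, 2, 3] -- the mutation happened in place
```

This is why the recursive flatten above silently returns None from its list branch.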
So let’s redefine flatten in Python:

def flatten(tree):
    result = []
    for node in tree:
        if isinstance(node, list):
            result.extend(flatten(node))
        else:
            result.append(node)
    return result

>>> flatten([1,2,3])
[1, 2, 3]
>>> flatten([1,[2,3]])
[1, 2, 3]

There is a similarity between this approach and the recursive one: different actions are taken for a list node and other nodes. I used a combined functional and imperative approach to solve the problem:

car = lambda lst: lst[0]
cdr = lambda lst: lst[1:]

'''
depth: returns the maximum nesting level of a list

Given:
    ls, a list

Result:
    an integer
'''

def depth(ls):
    if not ls:
        return 0
    elif isinstance(car(ls),list):
        return max(depth(car(ls))+1,depth(cdr(ls)))
    else:
        return max(1,depth(cdr(ls)))


'''
strip: removes the given elements from the top level of a list;
       with no elements given, splices the top-level sublists together

Given:
    ls, a list
    top, a list of elements to remove

Result:
    ls, the modified list
'''

def strip(ls, top):
    if top:
        for item in top:
            if item in ls:
                ls.remove(item)
    elif cdr(ls):
        ls = car(ls) + strip(cdr(ls), top) # case like [[1], ...]
    else:
        ls = car(ls)  # case like [[1]]
    return ls


'''
level: returns the atoms at the top level of a list

Given:
    ls, a list

Result:
    a new list
'''

def level(ls):
    if not ls:
        return []
    elif not isinstance(car(ls),list):
        return [car(ls)] + level(cdr(ls))
    else:
        return level(cdr(ls))

'''
levelize: returns a list of lists, each inner list containing the items of one level

Given:
    ls, a list

Result:
    a new list
'''

def levelize(ls):
    result = []
    a = list(ls)
    for i in range(2*depth(ls)):
        if not i%2:
            result.append(level(a))
        a = strip(a, level(a))
    return result

>>> levelize([1,[2,6,[3,5,[4,[]],[]],[7,[]]]])
[[1], [2, 6], [3, 5, 7], [4]]

Connecting the Nodes

We start off with a recursive list representing the tree structure, slightly different from the list for separating the levels:

[1,[2,[3,[4],5],6,[7]]]

Again, a mix of recursion and iteration easily solves the problem:

'''
pair: returns a list of lists, each holding a consecutive pair of elements (the last may be shorter)

Given:
        ls, a list

Result:
        a list
'''

def pair(ls):
    result = []
    while ls:
        result.append(ls[0:2])
        ls = ls[2:]
    return result
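As a quick check (repeating the definition so the snippet runs on its own), pair splits a list into consecutive chunks of at most two elements, with a trailing singleton when the length is odd:

```python
def pair(ls):
    # split a list into consecutive chunks of at most two elements
    result = []
    while ls:
        result.append(ls[0:2])
        ls = ls[2:]
    return result

print(pair([1, 2, 3, 4, 5]))  # [[1, 2], [3, 4], [5]]
```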

'''
connect: returns a list of tuples, each tuple represents an edge of the graph

Given:
        ls, a list

Result:
        a list of tuples
'''

def connect(ls):
    result = []
    if cdr(ls):
        if cdr(cdr(ls)):
            for item in pair(ls):
                result.extend(connect(item))
        else:
            second = car(cdr(ls))
            for item in level(second):
                result.append((car(ls),item))
            result.extend(connect(second))
    return result

>>> connect([1,[2,[3,[4],5],6,[7]]])
[(1, 2), (1, 6), (2, 3), (2, 5), (3, 4), (6, 7)]

Hopefully, you’ve had fun reading this article and perhaps have come up with a better way to represent the tree as a flat structure.


SearchJump Updated with Embedded Favicons

Mar 28 2010

Now you will notice that the icons load a lot faster and all at the same time if you are using either the minimal version or the plain version.

Note: the new version has a new namespace, so the old version must be uninstalled manually from Greasemonkey.


SearchJump Fixes

Mar 25 2010

[image: Capture]

  • Instead of clicking twice, the panel now hides itself after a single click.
  • By default, the script is only active on Google.
  • An icon has been added for Clusty.
  • The event listener for toggling the panel is only applied to the hide link.
  • Visual aspects are more balanced.


UW Course Calendar Scraper

Mar 15 2010

I’ve had the idea of making a self-updating, navigable tree of Waterloo courses, and this is the first step. (Actually, not the first step for me: it started with Django, which figured in my last work report’s comparison to Zen Cart. Some credit goes to Thomas Dimson for inspiration; he made the Course Qualifier.) The main idea for this step is to gather all the information to be stored in a database. With the idea and plan in place, the coding phase begins:

from scrapy.item import Item, Field

class UcalendarItem(Item):
    course = Field()
    name = Field()
    desc = Field()
    prereq = Field()
    offered = Field()

I wanted to gather the course (“SE 101”), name (“Introduction to Methods of Software Engineering”), desc (“An introduction …”), prereq (“Software Engineering students only”), and offered (“F”), each as a separate field.

[image: snapshot4]

To do that, I wrote a spider to crawl the page:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from ucalendar.items import UcalendarItem

class UcalendarSpider(BaseSpider):
    domain_name = "uwaterloo.ca"
    start_urls = [
        "http://www.ucalendar.uwaterloo.ca/0910/COURSE/course-SE.html"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        tables = hxs.select('//table[@width="80%"]')
        items = []
        for table in tables:
            item = UcalendarItem()
            item['desc'] = table.select('tr[3]/td/text()').extract()
            item['name'] = table.select('tr[2]/td/b/text()').extract()
            item['course'] = table.select('tr[1]/td/b/text()').re('([A-Z]{2,5} \d{3})')
            item['offered'] = table.select('tr[3]/td').re('.*\[.*Offered: (F|W|S)+,* *(F|W|S)*,* *(F|W|S)*\]')
            item['prereq'] = table.select('tr[5]/td/i/text()').re('([A-Z]{2,5} \d{3})')
            items.append(item)
        return items

SPIDER = UcalendarSpider()

There are several things to note:

  • The prereq field here cannot identify “For Software Engineering students only”; the regular expression only matches course codes.
  • Offered, unlike the other fields, can contain more than one item.
  • Prereq may be empty.
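The first point is easy to check in isolation; running the spider’s course-code pattern over a made-up prerequisite string, only the course code comes through:

```python
import re

# the course-code pattern used in the spider's prereq field
pattern = r'[A-Z]{2,5} \d{3}'

# made-up prerequisite text for illustration
text = 'Prereq: MATH 135; Software Engineering students only'

print(re.findall(pattern, text))  # ['MATH 135'] -- the prose restriction is dropped
```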

Finally, the spider pipes its results to an output format. CSV format meets the requirements, as it can be inserted into a database.

import csv

class CsvWriterPipeline(object):

    def __init__(self):
        self.csvwriter = csv.writer(open('items.csv', 'wb'))

    def process_item(self, spider, item):
        try:
            self.csvwriter.writerow([item['course'][0], item['name'][0], item['desc'][0], item['prereq'][0], item['offered'][0]])
        except IndexError:
            self.csvwriter.writerow([item['course'][0], item['name'][0], item['desc'][0], ' '.join(item['prereq']), ' '.join(item['offered'])])
        return item

Two gotchas:

  • Because prereq might be empty, there needs to be an exception handler
  • Offered may be variable length. The list needs to be joined to output all of the terms the course is offered.
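Since the point of the CSV is database insertion, here is a minimal sketch of loading it into SQLite with the standard library. The sample row and the table schema are made up for illustration (the desc field is renamed to description, since DESC is an SQL keyword):

```python
import csv
import io
import sqlite3

# a made-up sample row standing in for the scraper's items.csv output
sample = io.StringIO(
    'SE 101,Introduction to Methods of Software Engineering,'
    'An introduction ...,Software Engineering students only,F\n'
)

conn = sqlite3.connect(':memory:')  # throwaway in-memory database
conn.execute('CREATE TABLE courses '
             '(course TEXT, name TEXT, description TEXT, prereq TEXT, offered TEXT)')

# csv.reader yields one list of fields per row, ready for executemany
conn.executemany('INSERT INTO courses VALUES (?, ?, ?, ?, ?)', csv.reader(sample))

print(conn.execute('SELECT course FROM courses').fetchone()[0])  # SE 101
```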

This part of the project was done in 2 hours with Scrapy. The project can be found in the downloads section.


Debugging with the Scientific Method

Mar 12 2010

Recently, I was assigned to fix bugs on a large project done by 2 previous co-op students. I came across the Debugging section of Steve McConnell’s Code Complete 2 while looking up maintenance in the index. He suggests using the scientific method, summarized here:

  1. Stabilize the error
  2. Gather data
  3. Analyze the data
  4. Brainstorm hypotheses
  5. Carry out the test plan
  6. Did the experiment prove or disprove the hypothesis?

I applied this to a recent debugging problem. The bug is as follows, reported by a tester:

I entered data into an ouctome and assigned a rating.

Then used the Prev Next button to goto the next page and forgot to save the previous data.

Can there be a warning you have unsaved data before I lose this.

this should be for everyone student/employer/staff that enter data.

I started by trying to understand the problem, which I interpreted as:

The user expected to be able to use the prev and next buttons when filling out the forms. There was already a save button, which took them back to a view of all the outcomes instead of the next one. The user thought they had to click save for the page to remember their input.

Understanding the problem in terms of the business goals of the application opens up many possibilities; I chose to make the next and previous buttons save the input before taking the user to a different page. This is where the debugging begins. First, I checked the HTML output for the save button and looked up its id in the PHP scripts using grep. Finding the correct place to edit the code was easy; making the right modification was the hard part. The code looked like this:

$(document).ready(function () {
    $('#form_save_button').click(function () {
        $('#form_submitted').val('0');
        $('#form').submit();
    });
});

Fortunately, I knew enough JavaScript and functional programming to understand what the code was doing, at least on the surface. When the button is clicked, the function is executed: it sets a hidden form field to submitted status so that the PHP stores the input in the database. So I wrote what looked like a few innocent lines of code:

$('#next').click(function () {
    if(document.getElementsByName('rating')[0]
    && document.getElementsByName('info')[0]){
        $('#form').submit();
    }
});
$('#prev').click(function () {
    if(document.getElementsByName('rating')[0]
    && document.getElementsByName('info')[0]){
        $('#form').submit();
    }
});

It worked immediately in Firefox, and I thought I was done. Then I got a report from one of the testers that it did not work for them, and trying it again in IE reproduced the problem. At this point, superstition came into play: I had previous experience with IE where a JavaScript error prevented an independent section of code from working. I hypothesized the culprit could be the if statement, because it might not play well with jQuery. I tested that hypothesis by taking out the if statement.

As no progress was made, I checked the error console in Firefox. It reported that $('#next') referred to an object that did not exist, so I moved the script down below the point where the next link was created. It still did not work in IE. I decided the brute-force approach was to learn jQuery and understand exactly what the code was doing; the tutorial was surprisingly short. I made sure my code used the correct jQuery syntax. When I read the documentation on the click method, an idea came to me: IE was leaving the page before executing the registered event. There was supporting evidence in Firefox, where clicking next rather than save took many times longer.

At this point, I doubted mousedown would work, since I had already tried click. Luckily, I did look up the documentation on mousedown, and it looked like the right way to prove or disprove my hypothesis. Switching from click to mousedown verified it: to my surprise, hitting next saved the data in IE, with the same speed as hitting save.


Better Safari Books View

Mar 07 2010

Before

[image: screenshot.4]

After

[image: screenshot.7]

Download it here.


A Look at Sabayon Wallpapers Through the Ages

Feb 21 2010

I started using Sabayon back in 2007, as it was more fun than Windows XP. I decided on it after trying out many other Linux distributions with Live CDs downloaded and burned at school. Gentoo was the fastest Linux distribution, and Sabayon made it easy to install. At that time, it had automatic graphics card configuration to enable Compiz-Fusion. Compiz-Fusion is like the glassy windows you see in Vista, plus many other effects (the most famous being wobbly windows). I suppose that was the reason so many people tried Sabayon at the time. Unfortunately, on the laptop where I installed Sabayon, the graphics card was not capable of these effects (though I did get lightweight transparent windows, highly useful on a small screen). Anyway, here’s what the wallpaper looked like:

[image: sabayon33]

At that time, Sabayon (since it was just a way to install Gentoo) was for power users. The dark red and fossilized penguin footprint certainly leaves the rest of the pack behind. I was able to build a system on an Inspiron 1000 that would load the next web page as soon as I clicked on the link. The Reiser file system did the trick for caches, and compile time optimizations made better use of the Pentium 4 CPU.

Later in 2008, I got my own laptop and dual-booted XP with Sabayon 3.4. It’s hard to say which OS I liked better. Linux had trouble with wireless and crashed whenever I launched a program that used OpenGL; Windows came with all the right drivers, but I couldn’t do anything there besides work and play. I ended up playing a game called NetHack and wasting a lot of effort on an English presentation on Marcus Aurelius’s Meditations. I think I simply used Windows more because I liked the desktop colors:

[image: sabayon34]

Sabayon had too many splashes of red here and there, but all that changed with the next release.

[image: sabayon35]

This was my first glimpse of the future of Sabayon. It (I mean he) was (all) about dreams (besides being a surprise for me). I remember this quite well: it was when Sabayon became a binary distribution instead of just a Gentoo system that came in the box (not exactly out of the box, as the wireless required ndiswrapper). Let me emphasize that point again: I got a system I couldn’t boot as soon as I tried hibernate. Thus, the dreams burned to the ground. So I went back to Windows again, this time with Vista. When I had the chance, I made my computer quad-boot with XP, Vista, Sabayon, and Kubuntu. All of them had special characteristics and capabilities: Vista was the only one that could suspend and wake with CPU frequency scaling working on both cores, and Kubuntu was the only one that supported a full KDE 4 desktop. Nowadays, KDE 4 is available for all four of them. In February 2009, I tried Sabayon 4:

[image: sabayon4]

partly hoping those CPU frequency scaling bugs were fixed. However, DrScheme had poorly written C code that required custom compile options, so I got my work done in Windows. From May until December I used Windows 7 RC; it let me do everything quickly and easily. Most importantly, I was ahead of the technology curve with its elegant UI (now KDE users are beginning to copy it). I installed Sabayon 5.1 again over the past Christmas break:

[image: sabayon51]

This one definitely deserved a version bump, as the developers switched from -Os to -O2; -O2 generally produces faster compiled programs, while -Os always produces smaller ones. This was great, as my CPU fan ran a lot less; I rarely hear it now after a few tweaks. Moreover, this is the first time in over a year that I have used Linux continuously for more than a month. Congratulations to Sabayon! Here are more details about each release, for those who want to compare what truly matters to a user with what Sabayon has provided through its history:

Sabayon Linux 3.3 x86/x86-64: Press Release

Sabayon Linux x86/x86-64 3.4: Stable Release

Sabayon Linux x86/x86-64 3.5: Stable Release

Sabayon Linux x86/x86-64 4 Revision 1 Rolling Release

Sabayon Linux 5.1-r1 GNOME and KDE: Stable release


Broken Bing Query Fixed

Feb 13 2010

SearchJump has been updated with a new link for Bing. You can download it for Greasemonkey.

