Monday, October 12, 2009

Bresenham Algorithm

#include <stdlib.h>
#include <stdio.h>

//extern int plot(int x, int y);

int plot(int x, int y, int color)
{
    printf("plot(%d, %d, %d)\n", x, y, color);
    return 0;
}


void swap(int *a, int *b)
{
    int tmp;

    tmp = *a;
    *a = *b;
    *b = tmp;
    printf("swap %d with %d\n", *a, *b);
}



int line(int x1, int y1, int x2, int y2, int color)
{
    int steep;
    int deltax, deltay;
    int e, x, y, y_step;

    /* the line is steep when it changes more in y than in x */
    steep = (abs(y2 - y1) > abs(x2 - x1));

    if (steep) {
        swap(&x1, &y1);
        swap(&x2, &y2);
    }
    if (x1 > x2) {
        swap(&x1, &x2);
        swap(&y1, &y2);
    }
    deltax = x2 - x1;
    deltay = abs(y2 - y1);
    e = 0;          /* accumulated error term */
    y = y1;
    if (y1 < y2)
        y_step = 1;
    else
        y_step = -1;
    for (x = x1; x <= x2; x++) {
        if (steep)
            plot(y, x, color);
        else
            plot(x, y, color);
        e += deltay;
        if (2 * e >= deltax) {
            y += y_step;
            e -= deltax;
        }
    }
    return 0;
}


int main()
{
    int x1, x2, y1, y2, color;

    x1 = 0;
    y1 = 0;
    x2 = 50;
    y2 = 65;
    color = 1;

    line(x1, y1, x2, y2, color);

    return 0;
}

Sunday, September 27, 2009

Fixing choppy screen on Ubuntu Jaunty

According to a site I found through Google, the X server in Ubuntu Jaunty 9.04 has an issue accessing the video card's memory region.  My video card is an NVIDIA GeForce 8500 GT with the native driver from NVIDIA.  The kernel is 2.6.30.5 (compiled from source).

I fixed this by doing the following:

1) Run lspci -v and find the "VGA compatible controller" section.

Mine shows:

04:00.0 VGA compatible controller: nVidia Corporation GeForce 8500 GT (rev a1) (prog-if 00 [VGA controller])
        Subsystem: ASUSTeK Computer Inc. Device 034f
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at fd000000 (32-bit, non-prefetchable) [size=16M]
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at fa000000 (64-bit, non-prefetchable) [size=32M]
        I/O ports at ec00 [size=128]
        [virtual] Expansion ROM at febe0000 [disabled] [size=128K]
        Capabilities: <access denied>
        Kernel driver in use: nvidia
        Kernel modules: nvidia, nvidiafb

2) Calculate the accessible memory region (in KB, not MB) by subtracting the non-prefetchable part from the prefetchable part (use the smaller non-prefetchable region).  From the output above that is 256 MB - 16 MB (Google's calculator can do the conversion): 256 MB = 2^18 KB and 16 MB = 2^14 KB, so (2^18) - (2^14) = 245760 KB.
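
As a quick sanity check of that arithmetic, here is a throwaway C snippet (purely illustrative, not part of the fix itself) that prints the value to put into xorg.conf; the two MB figures are simply the ones from my lspci output:

#include <stdio.h>

int main(void)
{
    int prefetchable_mb     = 256;  /* 256M prefetchable region from lspci */
    int non_prefetchable_mb = 16;   /* 16M non-prefetchable region from lspci */

    /* xorg.conf expects VideoRam in KB; 1 MB = 1024 KB */
    printf("VideoRam       %d\n", (prefetchable_mb - non_prefetchable_mb) * 1024);
    return 0;
}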

3) As root, edit /etc/X11/xorg.conf and find `Section "Device"`.
4) Add `VideoRam #`, where # is the result from step 2.
For example, mine now looks like this:

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    VideoRam       245760
EndSection


5) Restart the X server.
6) Test with mplayer.  The screen now updates smoothly, with no flicker.

According to some sources, this issue will be fixed in an upcoming Ubuntu Jaunty update.

Tuesday, September 22, 2009

Unfolding code with full optimization flags turned on in GCC

Original code:

#include <stdio.h>
#include <math.h>


double a, b;

#define SQR(a)  ((a)*(a))

int main()
{
    double sum;

    a = 0.5;
    b = 0.5;
    sum = sqrt(SQR(sin(a)) + SQR(cos(b)));

    printf("sum = %f\n", sum);
    return 0;
}

CFLAGS is set to "-mtune=nocona -mfpmath=sse -msse3 -O3 -ffast-math"

The source code above, when compiled with GCC (e.g. gcc -S $CFLAGS test.c), gives:

    .file   "ssetest.c"
    .def    ___main;    .scl    2;  .type   32; .endef
    .section .rdata,"dr"
LC1:
    .ascii "sum = %f\12\0"
    .align 8
LC2:
    .long   0
    .long   1071644672
    .text
.globl _main
    .def    _main;  .scl    2;  .type   32; .endef
_main:
    pushl   %ebp
    movl    $16, %eax
    movl    %esp, %ebp
    subl    $24, %esp
    andl    $-16, %esp
    call    __alloca
    call    ___main
    fldl    LC2
    movl    $LC1, (%esp)
    fld     %st(0)
    fstl    _a
    fstl    _b
    fxch    %st(1)
    fsin
    fxch    %st(1)
    fcos
    fxch    %st(1)
    fstpl   -8(%ebp)
    movsd   -8(%ebp), %xmm2
    fstpl   -8(%ebp)
    movsd   -8(%ebp), %xmm0
    mulsd   %xmm2, %xmm2
    mulsd   %xmm0, %xmm0
    addsd   %xmm0, %xmm2
    sqrtsd  %xmm2, %xmm1
    movsd   %xmm1, 4(%esp)
    call    _printf
    xorl    %eax, %eax
    leave
    ret
    .comm   _a, 16   # 8
    .comm   _b, 16   # 8
    .def    _printf;    .scl    3;  .type   32; .endef

The generated code is quite efficient.  fsin/fcos compute the sine and cosine directly in CPU hardware (no emulation).  The compiler also uses SSE registers (xmm0, xmm1, xmm2) for the squaring, addition, and square root, so memory movement is kept to a minimum.
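
For comparison, here is a rough hand-written sketch of the same multiply/add/square-root sequence using SSE2 intrinsics.  It is only an illustration of what the compiler emitted in the xmm registers; the sin/cos here come from libm rather than from the fsin/fcos instructions:

#include <stdio.h>
#include <math.h>
#include <emmintrin.h>   /* SSE2 intrinsics */

int main()
{
    double a = 0.5, b = 0.5;

    __m128d s = _mm_set_sd(sin(a));          /* like the value left in xmm2 */
    __m128d c = _mm_set_sd(cos(b));          /* like the value left in xmm0 */

    s = _mm_mul_sd(s, s);                    /* mulsd: sin(a)^2 */
    c = _mm_mul_sd(c, c);                    /* mulsd: cos(b)^2 */
    s = _mm_add_sd(s, c);                    /* addsd: sum of squares */
    s = _mm_sqrt_sd(s, s);                   /* sqrtsd */

    printf("sum = %f\n", _mm_cvtsd_f64(s));  /* prints 1.000000, since a == b */
    return 0;
}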

Tuesday, September 15, 2009

Reasons Why Android Phones will win the war

Apple's iPhone is definitely the winner right now on slickness and coolness.  But one of its biggest downsides is that it is tied to a single provider (AT&T in the US), which charges too much ($30 for the data plan on top of the existing voice plan).

From a developer's perspective (at least mine), developing an application for the iPhone is not that fun.  First, it uses a proprietary OS that exerts a lot of control over the device.  Second, the Objective-C used in the SDK is weird to absorb at first for someone used to C/C++ or Java.  Also, the SDK only works on OS X (sorry Linux/Windows, you're forgotten!).  Another big downside: we cannot test the software we develop on a real device unless we pay Apple $99.

Meanwhile, Google Android is open source and is even based on Linux, the king of open source.  It also uses the Java language for application development, and the SDK supports all major platforms (well, except OpenSolaris, maybe?).  So far I sense strong similarities between the two SDKs (I think because both of them follow the design-patterns paradigm?).  One big winning point: no fee is required to test our software on a real device/handset.  That will draw many more programmers (especially from third-world countries, where $99 is beyond reach) to develop applications.

Why should Apple be very worried now?  A bunch of Chinese/Taiwanese vendors (HTC, Huawei, etc.) are jumping on the bandwagon.  So far HTC, Huawei, LG, Motorola, Samsung, Acer, Philips, and Sony Ericsson are in or planning to join.  If Nokia joins the group, that will be Apple's worst nightmare.

Thursday, September 10, 2009

Ooma slows down data traffic

It's been a month since I bought the Ooma VoIP system.  It has been working fine, apart from some issues, like the Scout hanging (it needs a reset).  My configuration puts the Ooma hub right after the DSL modem, so my wireless router is connected to the Ooma hub, per the suggestion in its manual.

I was curious to see how data traffic was affected.  My nominal DSL speed is 6 Mbps, and when the router was connected directly to the DSL modem, I could average more than 5 Mbps.  With the router connected behind the Ooma, I get below 5 Mbps.  That is not surprising, as the Ooma also acts as a NAT router and therefore adds overhead.

The settings shown on setup.ooma.com (an alias for its internal IP address) look very similar to a regular NAT router's.  The default internal IP address range it uses is 172.27.35.*

Ports used:
Telephony: 50 - Running
DNS: 45 - Running
Web Server: 47 - Running
VPN: 356 - Running
Free: 37008

The next experiment I will do is to put a packet sniffer on its "modem" port to figure out how it actually works.

Sunday, September 6, 2009

How much power does a MacBook draw?

To find out how much power an Apple MacBook laptop draws, I used a Kill-A-Watt EZ power meter (got it from Costco for about $28).  Select "Watt" mode and plug the MacBook's power cord into the device.

  • When the laptop is in standby mode (lid closed) and the battery is full (or at least 94% full), it draws 4 watts
  • With the power cord detached from the laptop, the power supply draws 0 watts
  • During booting, the laptop draws its maximum power, about 45 watts
  • In normal (casual) use, it draws 27-28 watts on average