Wednesday, December 1, 2004
Will CISC architecture die?
RISC optimizations are achieved with compiler assistance: in the desktop/server market, RISC machines rely on compilers to translate high-level code into RISC instructions, while the remaining CISC machines use hardware microcode to translate their instructions internally. One recent novel variation for the laptop market is the Transmeta Crusoe, which interprets 80x86 instructions and compiles them on the fly into internal instructions. Intel's recent Pentium 4 (NetBurst) architecture does something similar, breaking x86 instructions into internal micro-ops for its superscalar core.
The oldest architecture in computer engineering is the stack architecture. In the early 1960s, a company called Burroughs delivered the B5000, which was based on a stack architecture. Stack architectures had become almost obsolete until Sun's Java Virtual Machine revived the model. Some processors still use a stack architecture in places: for example, floating-point processing on x86 processors (the x87 FPU) and some embedded microcontrollers.
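Just to make "stack architecture" concrete, here is a tiny sketch of mine in C (the four-instruction set is invented purely for illustration): it evaluates (2+3)*4 the way a stack machine such as the B5000 or the JVM would, with operands pushed on a stack instead of held in registers.
/*----------------- stackdemo.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* A toy stack machine: operands live on a stack, and arithmetic
   instructions pop their inputs and push their result, the same
   model used by the Burroughs B5000, the x87 FPU, and the JVM. */
enum { PUSH, ADD, MUL, HALT };

int main(void)
{
    /* bytecode for (2 + 3) * 4 */
    int program[] = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
    int stack[16], sp = 0, pc = 0;

    for (;;) {
        switch (program[pc++]) {
        case PUSH: stack[sp++] = program[pc++]; break;
        case ADD:  sp--; stack[sp-1] += stack[sp]; break;
        case MUL:  sp--; stack[sp-1] *= stack[sp]; break;
        case HALT: printf("result = %d\n", stack[sp-1]); return 0;
        }
    }
}
/*----------------- end of stackdemo.c -----------------*/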
In the early 1980s, the direction of computer architecture began to swing away from providing high-level hardware support for languages. Ditzel and Patterson analyzed the difficulties encountered by the high-level-language architectures and argued that the answer lay in simpler architectures. In another paper, these authors first discussed the idea of RISC and presented the argument for simpler architectures. Two VAX architects, Clark and Strecker, rebutted their proposal.
In 1980, Patterson and his colleagues at Berkeley began the project that was to give this architectural approach its name. They built two computers called RISC-I and RISC-II. Because the IBM project on RISC was not widely known or discussed, the role played by the Berkeley group in promoting the RISC approach was critical to the acceptance of the technology. They also built one of the first instruction caches to support a hybrid-format RISC: it supported 16- and 32-bit instructions in memory, but only 32-bit instructions in the cache. The Berkeley group went on to build RISC computers targeted toward Smalltalk and LISP.
In 1981, Hennessy and his colleagues at Stanford University published a description of the Stanford MIPS computer. Efficient pipelining and compiler-assisted scheduling of the pipeline were both important aspects of the original MIPS design. MIPS stood for "Microprocessor without Interlocked Pipeline Stages", reflecting the lack of hardware to stall the pipeline, as the compiler would handle dependencies.
In 1987, a new company named Sun Microsystems started selling computers based on the SPARC architecture, a derivative of the Berkeley RISC-II processor. In the early 1990s, Apple, IBM, and Motorola co-developed a new RISC processor family called PowerPC, which is now used in every computer Apple makes. The latest PowerPC is the G5, which Apple ships in dual-processor configurations. Clock-for-clock, Apple's Macs can outrun Intel's x86 machines, but because Intel markets aggressively around "GigaHertz," many people still assume that a higher clock speed always means faster processing, which is not the case. Graphics chip makers such as NVIDIA and ATI also base their graphics coprocessors on RISC architectures, with even more advanced technologies (just for your info, NVIDIA's GeForce 6 GPUs have more transistors than the latest Pentium 4 Extreme Edition).
Why, then, does the old CISC technology in x86 still survive? The answer is machine-level compatibility. With millions of x86 processors installed in PCs worldwide, Intel of course wants to keep it that way. Its joint project with HP (the IA-64 architecture, one product of which is named Itanium, and which draws on RISC ideas) could not repeat the success of x86. But although x86 chips are CISC from the code's perspective, they are actually a hybrid of RISC and CISC: the processors now borrow RISC techniques such as pipelining, prefetching, superscalar execution, branch prediction, and SIMD parallelism (popularized by Intel under the names MMX, SSE, SSE2, and SSE3).
Monday, November 22, 2004
Is Zuse the father of the Digital Computer?
He is also the father of programming languages. In 1945/1946 he finished his "Plankalkül", the world's first programming language, thus establishing his name as a software pioneer. It was not presented to the public until 1972.
It is hard to trace who truly is the father of the computer, or of the computing machine. No single person can be credited with the work: Pascal, Babbage, Turing, Atanasoff, Mauchly and Eckert, and von Neumann all contributed to making the computer what we see and use today.
His name also reminds me of the German Linux distro SuSE, though that name is apparently an acronym (Software- und System-Entwicklung) rather than a tribute to him.
Thursday, November 18, 2004
Is Java going open source?
Another of Sun's plans is to open up its Solaris 10. The license is not the GNU GPL, but it seems to be a similar one. Will it pull people away from the Linux environment? We still need to see. So far, though, Sun's GUI is far behind Windows, and even Linux, in terms of quality. The new operating system will run on Opteron, Xeon, and UltraSPARC.
Tuesday, November 16, 2004
HD (high-definition) video is stalled again
The DVD industry’s track record when it comes to standards is far from perfect. Remember when Sony, Philips, and others went against the DVD Forum to establish the DVD+RW format after the Forum shunned the +RW technology in favor of DVD-RAM and DVD-RW? That fight delayed the widespread adoption of DVD recorders for three years.
Now, the industry must address the move toward HDTV-level 1080i (1080-line, interlaced) resolution for DVD content. Consumers who have spent big money on HDTV monitors are waiting.
A product such as DVD involves many standards issues, including factors such as power and interfaces. But two major issues demand the most attention: the recording format and the video-encoding format. Initially, industry players both inside and outside the DVD Forum considered two approaches. The first involved staying with the existing 9-Gbyte format and using more aggressive encoding to pack a feature-length, high-definition movie onto one disc. The DVD Forum, working on what it terms HD-DVD, favored this conservative approach because it would maintain full compatibility with existing discs. Sony, Matsushita, and others favored a move to “Blu-ray” technology. By changing to a "blue"-wavelength laser, Blu-ray would allow a disc to store 25 Gbytes. However, a player would need two lasers—red and blue—to play both old and new discs.
Now, Toshiba and NEC have produced a compromise, which the DVD Forum has endorsed. The duo has developed a blue laser that can provide higher capacity and also read today’s discs. The compromise reduces capacity to 20 Gbytes, 5 Gbytes fewer than Blu-ray.
Of course, the Blu-ray group wants nothing to do with the compromise. This spring, the group formed its own industry body, the BDA (Blu-ray Disc Association). Hey, if you can’t get your way in this industry, just create your own standards body. The game is clearly about getting your own technology embedded into the next standard, so that you can collect royalties on top of the profit that you make selling your own products.
Meanwhile, a battle raged for a while on the encoding side. The BDA initially appeared to be sticking with the MPEG-2 encoding that existing DVDs use. On the DVD Forum side, Microsoft entered the battle, trying to get its Windows Media technology into the next standard. As of press time, a rare outbreak of logical thinking seems to have taken place: Both the BDA and the DVD Forum have announced plans to support MPEG-2, H.264, and Microsoft’s Windows Media 9.
So, for now, we wait. Hollywood hasn’t weighed in with the standard that it prefers. Meanwhile, Sony has proclaimed that its Playstation 3 will use BDA technology. The BDA is also aggressively pursuing datacentric applications in addition to next-generation DVD video. And manufacturers will soon ship expensive, rewritable BDA products.
Enter China. Chinese companies and the Chinese government already had a major dislike for the DVD technology that the rest of the world uses. Specifically, they didn’t like paying royalties to the companies that had key technologies embedded in the DVD standards. And you can bet that Chinese vendors didn’t want to wait for the high-definition conflict in the rest of the world to play out.
So a standards organization of the Chinese government—SAC (Standardization Administration of China)—rolled out a new spec, EVD (Enhanced Video Disc). The spec is complete, and vendors are shipping early products. North American vendors, such as LSI Logic, are offering EVD chip sets. High-definition Chinese content is trickling into the Chinese market, with some Hollywood content expected next year.
There’s nothing like governments, multiple international standards bodies, and the collaboration of private industry associations to stave off adoption of a compelling new technology.
Friday, October 22, 2004
RE: Printing dates on digital camera pictures
The tool supports batch mode, so you don't have to manually process every single file.
Here is the link.
http://www.friedemann-schmidt.com/software/exifer/
Thursday, October 14, 2004
Are you one of the many Indonesians who love reading the classic martial-arts novels of Kho Ping Ho? If so, you probably know that www.detik.com has been providing an online edition of his novels. At the time I am writing this blog, it is publishing "Harta Karun Jenghis Khan".
Unfortunately, the site provides only a few pages of the current novel every day (although past novels are archived there), and one album can run to hundreds of these pages. I am too lazy to read the site every day. If you are like me, I have developed two simple scripts that download a whole album so it becomes readable offline. One thing you need to know: in order to get a complete set of the novel, you have to wait until the last episode gets published. Currently, the default link you need to pass to the geturl.tcl script is http://jkt.detik.com/khopingho/[ name of the album ]/episode1.shtml
To run the script:
- do: geturl.tcl http://jkt.detik.com/khopingho/[albumname]/episode1.shtml. For example: geturl.tcl http://jkt.detik.com/khopingho/hartakarunjenghiskhan/episode1.shtml
- type: merge.tcl
- Enter the number of episodes (the number of episode*.shtml files you just downloaded, or any sufficiently large number, such as 5000)
- The result will be merged.html, and there will be a directory called "images" containing all the pictures (if any).
Ok, enough talking. Now save the following script as geturl.tcl:
#----------- geturl.tcl -------------------
#!/bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" ${1+"$@"}
if {[lindex $argv 0] == ""} {
    puts "usage: $argv0 url"
    exit
}
set url [lindex $argv 0]
set urlpath [file dirname $url]
set logfile "[file tail $urlpath]\.log"
puts "Getting $url ..."
puts "log file: $logfile"
puts "You can see the progress by typing \"tail -F $logfile\""
# wget options: recursive mirror of .shtml pages from detik.com only,
# flattened into one directory, links converted for offline reading
set par "-nv --force-html --tries=0 --cache=on --convert-links --recursive --accept=shtml --domains=detik.com --no-directories --glob=on -L -p -m --page-requisites -np -nd -o $logfile $url"
set res [ exec sh -c "wget $par" ]
#----------------------- end of geturl.tcl -----------------------
and the following as "merge":
#----------------------- start of merge.tcl ----------------------
#!/usr/bin/tclsh
proc AskAndGet { msg } {
puts -nonewline $msg
flush stdout
return [gets stdin]
}
set n [AskAndGet "Number of files: "]
set title [string range [pwd] [expr {[string last "/" [pwd]] + 1}] end]
puts "Title = $title"
set fho [open "merged.html" w]
# write a minimal HTML header for the merged file
puts $fho "<html>"
puts $fho "<head>"
puts $fho "<title>$title</title>"
puts $fho "</head>\n<body>"
for {set i 1} {$i <= $n} {incr i} {
    set fn "episode${i}.shtml"
    if {[file exists $fn]} {
        puts "File $fn exists...wait while I merge it ...."
        set fhi [open $fn r]
        set line [gets $fhi]
        set line [string trim $line]
        set line "$line\n"
        # skip ahead until the opening <body> tag of the episode page
        while {![eof $fhi] && ![regexp -nocase {<body} $line]} {
            set line [gets $fhi]
            set line [string trim $line]
            set line "$line\n"
        }
        ## found the start point, now read until the closing </body> or </html>
        while {![eof $fhi] && !([regexp -nocase {</body} $line] || [regexp -nocase {</html} $line])} {
            set line [gets $fhi]
            if {[regexp -nocase {Episode belum ada atau sudah habis} $line]} {
                continue
            }
            if {[regsub -all {\xC2} $line "" line]} {
                puts "0xC2 found and has been removed"
                #exit
            }
            if {[regsub -all {[\x93]} $line {"} line]} {
                #puts "OPENQUOTE: \{$line\}"
            }
            if {[regsub -all "\x94" $line {"} line]} {
                #puts "CLOSEQUOTE: \{$line\}"
            }
            # download any embedded picture once, into ./images
            if {[regexp -nocase {http://jkt.detik.com/khopingho/images/(.*).jpg} $line dummy imgname]} {
                if {![file exists "./images"]} {
                    file mkdir "./images"
                }
                set imgname "${imgname}.jpg"
                if {![file exists "./images/$imgname"]} {
                    puts "Downloading picture: $imgname"
                    set imgurl "http://jkt.detik.com/khopingho/images/$imgname"
                    exec wget -q $imgurl
                    if {[file exists $imgname]} {
                        exec mv $imgname "./images"
                    }
                } else {
                    puts "$imgname exists in ./images; not downloaded"
                }
                # rewrite the image URL so the merged page uses the local copy
                regsub -nocase {http://jkt.detik.com/khopingho/images/(.*).jpg} $line "./images/${imgname}" line
            }
            puts $fho $line
            #puts $line
            if {[regexp -nocase {</body} $line] || [regexp -nocase {.*>[ ]*[ ]*TAMAT[ ]*} $line]} {
puts "End of episode $fn"
break
}
}
close $fhi
} else {
; #puts "File $fn does not exist"
}
}
puts $fho "</body>\n</html>"
close $fho
#------------------------ end of merge.tcl -----------------------
Tuesday, October 12, 2004
Did you know that Fermat's Last Theorem was proven in 1994 by Andrew Wiles, a British mathematician working at Princeton University, USA? He got his Ph.D. from the University of Cambridge, UK. See http://www-gap.dcs.st-and.ac.uk/~history/HistTopics/Fermat's_last_theorem.html
Pierre de Fermat (1601 - 1665) was a French lawyer (yes, a lawyer!) who pursued mathematics in his spare time. He is most famous for scribbling a note in the margin of a book by Diophantus claiming he had discovered a proof that the equation x^n + y^n = z^n has no integer solutions for n > 2. He stated, "I have discovered a truly marvelous proof of this, which however the margin is not large enough to contain." The proposition, which came to be known as Fermat's Last Theorem, baffled all attempts to prove it until A. Wiles succeeded in 1995.
For detail, see http://mathworld.wolfram.com/FermatsLastTheorem.html
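Out of curiosity, here is a little brute-force checker I sketched for the n=3 case over a small range; it proves nothing, of course (that is exactly why Wiles's proof matters), but it shows what the theorem asserts:
/*----------------- fermat_check.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* Brute-force search for counterexamples to x^3 + y^3 = z^3 over a
   small range. Finding none is expected: Wiles proved none exist. */
static long cube(long v) { return v * v * v; }

int main(void)
{
    long x, y, z, found = 0;
    for (x = 1; x <= 100; x++)
        for (y = x; y <= 100; y++)
            for (z = y; z <= 200; z++)
                if (cube(x) + cube(y) == cube(z)) {
                    printf("counterexample: %ld^3 + %ld^3 = %ld^3\n", x, y, z);
                    found = 1;
                }
    if (!found)
        printf("no solutions with 1 <= x <= y <= 100, as the theorem predicts\n");
    return 0;
}
/*----------------- end of fermat_check.c -----------------*/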
Monday, October 11, 2004
Interesting article! As it says, the future computer will not be based on tiny semiconductor gates, but will use atoms' spins as its logic. It forecasts that around the year 2030 the width of a single wire in a microprocessor will reach the width of a single atom, and this is the limit if scientists do not find other ways to build computer technology.
Computers built on this quantum-mechanical physics could be a billion times faster than a Pentium III PC, so applications such as cryptanalysis (cracking encrypted codes) could take minutes instead of months or years. Amazing!
In August 2000, researchers at IBM-Almaden Research Center developed what they claimed was the most advanced quantum computer developed to date. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by nuclear magnetic resonance (NMR) instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.
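To get a feel for the order-finding problem mentioned above, here is a toy classical version (my own sketch, not IBM's experiment): the smallest r with a^r mod N = 1 has to be found by plain iteration, the "repeated cycles" a conventional computer is stuck with.
/*----------------- order_finding.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* Classical order-finding: the smallest r such that a^r mod N == 1.
   Assumes gcd(a, N) == 1; otherwise no such r exists and the loop
   would never terminate. A quantum computer finds r in one step. */
static int order(int a, int N)
{
    int r, acc = a % N;
    for (r = 1; acc != 1; r++)
        acc = (acc * a) % N;
    return r;
}

int main(void)
{
    /* example: powers of 7 mod 15 cycle 7, 4, 13, 1 -> order 4 */
    printf("order of 7 mod 15 = %d\n", order(7, 15));
    return 0;
}
/*----------------- end of order_finding.c -----------------*/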
Full article can be read at:
http://computer.howstuffworks.com/framed.htm?parent=quantum-computer.htm&url=http://www.amd1.com/quantum_computers.html
Friday, October 8, 2004
Recently, I was browsing the Internet, googled "fastest PC", and got to a site called "Michael's SuperComputers" @ http://www.michaelscomputers.com/. They claim it outperforms its nearest competitor, Apple's dual 2.5 GHz G5, by 12X.
According to the site, the computer is powered by a 3.6 GHz Pentium 4 Extreme Edition (I wonder how much this processor costs; when I checked some online stores around April-June 2004, it was above $1000!). The other thing is that they say it uses SATA-X Hyperdrives. I have never heard of SATA-X, but I guess it's a modification of SATA, or the next version of SATA.
For multimedia output, they use a top-end VGA card, either an NVIDIA 6800 Ultra or an ATI X800E 256 MB, and a SoundBlaster Platinum Audigy ZS sound card for the audio.
I doubt it is the fastest PC right now, as it lacks PCI Express slots, gives no information about FireWire 800 (800 Mbps transfer rate), and has no support for dual- or multi-core processors (a processor with two or more cores on a single die, which I believe will be faster than current processors), but you had better check it out.
Saturday, October 2, 2004
Have you tried Shoutcast? It is a very interesting plug-in for Winamp. Using the plug-in, you can broadcast the MP3 files being played by Winamp to the Internet. Unfortunately, the method it uses is unicast, meaning that every user connected to your server adds another full copy of the stream to your bandwidth usage, unlike multicast-based servers. If you know any free MP3 multicast server for Linux, please let me know.
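Here is a quick back-of-the-envelope sketch of why unicast hurts (the 128 kbps figure is just an assumed typical MP3 stream bitrate): the server's upstream bandwidth grows linearly with the number of listeners.
/*----------------- unicast_bw.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* With unicast streaming, the server sends a separate copy of the
   stream to every listener, so upstream bandwidth grows linearly.
   A multicast server would send one copy regardless of audience size. */
int main(void)
{
    double stream_kbps = 128.0;   /* assumed typical MP3 stream bitrate */
    int listeners;
    for (listeners = 1; listeners <= 64; listeners *= 2)
        printf("%2d listeners -> %6.0f kbps upstream\n",
               listeners, listeners * stream_kbps);
    return 0;
}
/*----------------- end of unicast_bw.c -----------------*/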
I liked it when I tried it on my Linux box; it is very efficient, although on Linux it is command-line only (well, according to www.shoutcast.com you can use the XMMS player, but I tried that with no success). For every user connected to the server, it opens a thread/worker to provide service. The listener itself listens on port 8001, while port 8000 serves a web page where you can see what song is playing, the song history, and statistics (for the admin only; you can set a password for this).
If you are interested, you can try it or pay a visit to my experimental site: mlutfi.homelinux.org
Myth 1: NAND Flash is slower than NOR.
The Reality: The performance characteristics of NAND Flash are fast write (or program) speed, fast erase speed, and medium read speed. This makes NAND Flash ideal for low-cost, high-density, high-speed program/erase applications.
Although NOR Flash offers a slight advantage in random read access times, NAND offers significantly faster program and erase times. For high performance data storage requirements, such as storing digital photos, downloading music and other advanced features popular in today's cell phones, the write/erase speeds of NAND provide a distinct performance advantage. This high performance is also what has made NAND Flash cards so widely used in data storage applications such as digital cameras.
Comparing the time required to perform a typical program and erase sequence for NOR and NAND Flash, for a 64KB erasable unit of memory, NAND outperforms NOR by a wide margin, at 17 milliseconds for NAND, and 2.4 seconds for NOR. In a system application, this difference is large enough to be easily noticed by the user. For the read function, the NAND performance is sufficient to support the system requirement, without a noticeable delay for the user.
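A quick sketch of the arithmetic behind those figures:
/*----------------- flash_timing.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* Ratio of the program/erase figures quoted above for a 64KB unit:
   17 ms for NAND versus 2.4 s for NOR. */
int main(void)
{
    double nand_ms = 17.0, nor_ms = 2400.0;
    printf("NAND advantage: about %.0fx\n", nor_ms / nand_ms);          /* ~141x */
    printf("effective NAND rate: %.1f MB/s\n",
           64.0 / nand_ms * 1000.0 / 1024.0);                           /* ~3.7 MB/s */
    return 0;
}
/*----------------- end of flash_timing.c -----------------*/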
Today, many designers build upon the conventional cell phone memory architecture by increasing density of the NOR and PSRAM, and adding NAND Flash to obtain greater performance and capacity for data storage.
Myth 2: NAND is not reliable
The Reality: Just as a hard disk drive is widely accepted despite occasional bad sectors, NAND works in a similar way: the controller maps around bad memory areas, and error correction code (ECC) is used to correct bit errors. All controllers for NAND Flash have built-in ECC to automatically correct bit errors.
The industry standard is to correct bit errors to a level comparable to that of hard disk drives, or 10^-14, which means one uncorrectable bit error every 10^14 bits (12.5 terabytes). System designers have long been aware of the benefits of using ECC to detect and correct errors. Historically, memory subsystems have used Hamming codes, while ECC schemes such as Reed-Solomon are common in hard drives and CD-ROMs.
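For the curious, here is a minimal sketch of the classic Hamming(7,4) scheme mentioned above; it is a textbook illustration, not the actual ECC of any particular NAND controller. Four data bits get three parity bits, and the syndrome of a corrupted word directly names the flipped bit position:
/*----------------- hamming74.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* Hamming(7,4): 4 data bits protected by 3 parity bits. Parity bits
   sit at positions 1, 2 and 4; the syndrome of a corrupted word is
   exactly the position of the flipped bit, so errors self-locate. */
static unsigned encode(unsigned d)          /* d = 4 data bits */
{
    unsigned b[8], w = 0;                   /* b[1..7] = bit positions */
    int i;
    b[3] = (d >> 3) & 1; b[5] = (d >> 2) & 1;
    b[6] = (d >> 1) & 1; b[7] = d & 1;
    b[1] = b[3] ^ b[5] ^ b[7];              /* parity over positions 1,3,5,7 */
    b[2] = b[3] ^ b[6] ^ b[7];              /* parity over positions 2,3,6,7 */
    b[4] = b[5] ^ b[6] ^ b[7];              /* parity over positions 4,5,6,7 */
    for (i = 1; i <= 7; i++) w |= b[i] << (7 - i);
    return w;
}

static unsigned syndrome(unsigned w)        /* 0 = clean, else bad position */
{
    unsigned b[8];
    int i;
    for (i = 1; i <= 7; i++) b[i] = (w >> (7 - i)) & 1;
    return 4*(b[4]^b[5]^b[6]^b[7]) + 2*(b[2]^b[3]^b[6]^b[7]) + (b[1]^b[3]^b[5]^b[7]);
}

int main(void)
{
    unsigned w   = encode(0xB);             /* data bits 1011 */
    unsigned bad = w ^ (1 << (7 - 5));      /* flip the bit at position 5 */
    unsigned s   = syndrome(bad);           /* syndrome = 5 */
    printf("sent %02X, got %02X, syndrome %u, fixed %02X\n",
           w, bad, s, bad ^ (1 << (7 - s)));
    return 0;
}
/*----------------- end of hamming74.c -----------------*/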
Myth 3: NAND Flash is hard to integrate into a system.
The Reality: NAND Flash has an indirect, I/O-like access model. Therefore, it must be accessed through a command sequence instead of through the direct application of an address to the address lines. NAND Flash also has internal command, address, and data registers. Today, a wide selection of NAND controllers and software drivers is available, making integration into a system relatively simple.
Although this interface may appear more cumbersome than the direct interface of NOR Flash, a notable advantage is the relative ease in upgrading to a higher density chip. Because of the indirect interface, the external pinout, or connection to the host, does not change with the density of the chip. This is similar to the hard disk drive interface in which different densities of hard disk drives could use the same cable interface.
Myth 4: MLC NOR is close to matching NAND capacities.
The Reality: The maximum density currently available in MLC NOR Flash is 256Mb. The highest available capacity for MLC NAND Flash is currently 2Gb, and the highest available capacity for SLC NAND Flash is 1Gb.
A common method to increase the capacity of NOR Flash is to store multiple charge levels (typically four) enabling the storage of 2 bits in a memory cell, also known as Multi-level Cell (or MLC) NOR. However, by implementing MLC architecture, the effective speed is further reduced and write/erase endurance is also reduced.
Myth 5: MLC NAND won't hold up under extended use.
The Reality: MLC Flash has a different rating for the number of read/write cycles compared to SLC NAND Flash. Currently, SLC Flash is rated to have approximately 100,000 cycles and MLC Flash is rated to have approximately 10,000 cycles. However, if a 256MB MLC card can typically store 250 pictures from a 4-megapixel camera (a conservative estimate), its 10,000 read/write cycles, combined with wear-leveling algorithms in the controller, will enable the user to store and/or view approximately 2.5 million pictures within the expected useful life of the card. That number is so far beyond the average number of photos taken by the typical user that the difference in endurance is not significant for this application.
For those not familiar with the technology, MLC NAND Flash allows each memory cell to store two bits of information, compared to one bit-per-cell for SLC NAND Flash, resulting in a larger capacity and lower bit cost. While SLC NAND may be more appropriate for some specific applications, the difference will not affect the many common consumer applications, including most digital camera users. MLC NAND provides a very competitive level of performance and makes high density NAND cards more affordable, resulting in its growing popularity among consumers.
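The endurance arithmetic from the claim above is simple enough to sketch:
/*----------------- endurance.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* With wear leveling, each of the card's rated write/erase cycles can
   store a full card's worth of photos, using the numbers quoted above. */
int main(void)
{
    double photos_per_fill = 250.0;   /* 256MB card, 4-megapixel camera */
    double cycles = 10000.0;          /* rated MLC write/erase cycles */
    printf("photos over card lifetime: %.1f million\n",
           photos_per_fill * cycles / 1e6);   /* = 2.5 million */
    return 0;
}
/*----------------- end of endurance.c -----------------*/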
Myth 6: MLC NAND does not have the performance or endurance to reliably store your digital photos.
The Reality: MLC NAND is rated to have approximately 10,000 cycles, a level that is lower than SLC NAND, but more than sufficient to meet the needs of the vast majority of consumer users. A significant portion of the NAND Flash-based memory cards on the market today are made from MLC NAND, and the continuing rapid growth of this market can be considered an indication that the performance is meeting consumers' needs.
Myth 7: MLC NAND does not have high enough performance for streaming video.
The Reality: The performance of MLC NAND is sufficient to support the 6 to 8 Mbit/s transfer rate needed to store MPEG-2 compressed video on a memory card. This works out to approximately 1MB/second, and MLC NAND can transfer and write approximately 1.7MB/second.
Myth 8: SLC NAND is a generation ahead of MLC NAND.
The Reality: On Toshiba's roadmap, SLC development leads MLC by only two to three months. Presently, for each new generation, SLC chips are designed with MLC requirements in mind, so there is little lag-time between the two types of NAND.
The real issue is market acceptance, not actual time-to-market for the next generation. Currently, MLC development is well-timed to match market acceptance, with 512MB and 1GB cards widely available today to meet market demand.
Myth 9: The additional circuitry needed for MLC NAND takes up a significant amount of real estate.
The Reality: The circuitry required for MLC NAND is relatively minimal. A 4Gb MLC NAND Flash chip provides approximately 1.95 times greater density than a 2Gb SLC NAND chip. We believe that the more important question to the user is "what density can you get in a chip today?" Presently, the highest density MLC NAND Flash in production is 4Gb, whereas the highest density SLC NAND in mass production is 2Gb. The market demand for ever-higher densities of removable storage makes the lower-cost, higher density MLC card attractive to users and continues to enable new applications to emerge.
The rated storage capacity of 2Gb SLC NAND is 271MB, compared to 529MB for a 4Gb MLC NAND Flash chip, a density increase of approximately 1.95 times.
Myth 10: NAND Flash is a slow storage technology.
The Reality: NAND Flash offers excellent performance for data storage. As a point of comparison, it can offer significantly faster performance and reliability than a hard disk drive, depending on the number and size of files transferred. For a random access of a 2kB file, a typical hard disk drive might take approximately 10ms to retrieve a file, while NAND Flash would take about 0.13ms to retrieve a similar size file. For a comparable write function with the 2kB file, NAND could be as much as 20 times faster. Because it is a solid state memory with no moving parts, NAND flash features a significantly shorter random access time compared to a mechanical hard disk drive.
Thursday, September 30, 2004
I got these links from some magazine (I guess it was MaximumPC). Some of them are not that interesting, but some are cool!
http://www.windowsupdate.com
http://www.arstechnica.com
http://www.slashdot.org
http://www.shocknews.com
http://www.theinqurer.net
http://www.penny-arcade.com
http://www.gizmodo.com
http://www.hyperdictionary.com
http://www.wikipedia.com
http://www.maximumpc.com/forums
Wednesday, September 29, 2004
Good question. Based on recent ads, the prices of flash memory have tumbled significantly compared to a few months ago. One online merchant I visited (www.tigerdirect.com) even offers $9.99 for a 256 MB Kingston CompactFlash card (I guess it is 1x speed).
Depending on your needs, the price can range from US $9.99 up to a couple hundred dollars for top-end memory (such as 4 GB, 40x speed). Speed plays a big role in pricing. The access-speed (read/write) rating works like a CD drive's: 150 KB/sec is 1x speed, 300 KB/sec is 2x speed, and so on. The new high-speed CF is sometimes called CF II.
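A quick sketch of that rating arithmetic (using the CD-drive convention just described):
/*----------------- cf_speed.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* CompactFlash "x" ratings reuse the CD-ROM convention:
   1x = 150 KB/s, so an Nx card moves N * 150 KB/s. */
int main(void)
{
    int ratings[] = { 1, 2, 4, 12, 40 };
    int i;
    for (i = 0; i < 5; i++)
        printf("%2dx = %5d KB/s (%.1f MB/s)\n",
               ratings[i], ratings[i] * 150, ratings[i] * 150 / 1024.0);
    return 0;
}
/*----------------- end of cf_speed.c -----------------*/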
One thing I still hate to see is that there are too many variants and competing standards of flash memory: CompactFlash I/II, Memory Stick, Memory Stick Pro, Memory Stick Duo, SmartMedia, Secure Digital, MMC, and what else, I don't remember. Why don't these people just make one single standard? Then our life would be better, wouldn't it?
For people who are eager to see and compare prices, check www.shopping.com, www.dealtime.com, www.mysimon.com, www.techbargains.com, or www.ebay.com. There are many other online shopping-comparison portals, but I cannot list them all here; just search on Google and you will find many. Comments from previous buyers on these sites are often useful: the more buyers leave comments, the more confidence (or doubt) you can have. Just check them out!
Introduction
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for the assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for such capabilities as network layer address negotiation and data-compression negotiation. PPP supports these functions by providing an extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to negotiate optional configuration parameters and facilities. In addition to IP, PPP supports other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet.
PPP Components
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP contains three main components:
- A method for encapsulating datagrams over serial links. PPP uses the High-Level Data Link Control (HDLC) protocol as a basis for encapsulating datagrams over point-to-point links. (See Chapter 16, "Synchronous Data Link Control and Derivatives," for more information on HDLC.)
- An extensible LCP to establish, configure, and test the data link connection.
- A family of NCPs for establishing and configuring different network layer protocols. PPP is designed to allow the simultaneous use of multiple network layer protocols.
General Operation
To establish communications over a point-to-point link, the originating PPP first sends LCP frames to configure and (optionally) test the data link. After the link has been established and optional facilities have been negotiated as needed by the LCP, the originating PPP sends NCP frames to choose and configure one or more network layer protocols. When each of the chosen network layer protocols has been configured, packets from each network layer protocol can be sent over the link. The link will remain configured for communications until explicit LCP or NCP frames close the link, or until some external event occurs (for example, an inactivity timer expires or a user intervenes).
Physical Layer Requirements
PPP is capable of operating across any DTE/DCE interface. Examples include EIA/TIA-232-C (formerly RS-232-C), EIA/TIA-422 (formerly RS-422), EIA/TIA-423 (formerly RS-423), and International Telecommunication Union Telecommunication Standardization Sector (ITU-T) (formerly CCITT) V.35. The only absolute requirement imposed by PPP is the provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link layer frames. PPP does not impose any restrictions regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.
PPP Link Layer
PPP uses the principles, terminology, and frame structure of the International Organization for Standardization (ISO) HDLC procedures (ISO 3309-1979), as modified by ISO 3309:1984/PDAD1 "Addendum 1: Start/Stop Transmission." ISO 3309-1979 specifies the HDLC frame structure for use in synchronous environments. ISO 3309:1984/PDAD1 specifies proposed modifications to ISO 3309-1979 to allow its use in asynchronous environments. The PPP control procedures use the definitions and control field encodings standardized in ISO 4335-1979 and ISO 4335-1979/Addendum 1-1979. The PPP frame format appears in Figure 13-1.
The following descriptions summarize the PPP frame fields illustrated in Figure 13-1:
- Flag—A single byte that indicates the beginning or end of a frame. The flag field consists of the binary sequence 01111110.
- Address—A single byte that contains the binary sequence 11111111, the standard broadcast address. PPP does not assign individual station addresses.
- Control—A single byte that contains the binary sequence 00000011, which calls for transmission of user data in an unsequenced frame. A connectionless link service similar to that of Logical Link Control (LLC) Type 1 is provided. (For more information about LLC types and frame types, refer to Chapter 16.)
- Protocol—Two bytes that identify the protocol encapsulated in the information field of the frame. The most up-to-date values of the protocol field are specified in the most recent Assigned Numbers Request For Comments (RFC).
- Data—Zero or more bytes that contain the datagram for the protocol specified in the protocol field. The end of the information field is found by locating the closing flag sequence and allowing 2 bytes for the FCS field. The default maximum length of the information field is 1,500 bytes. By prior agreement, consenting PPP implementations can use other values for the maximum information field length.
- Frame check sequence (FCS)—Normally 16 bits (2 bytes). By prior agreement, consenting PPP implementations can use a 32-bit (4-byte) FCS for improved error detection. (See the sketch after this list.)
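To make the layout concrete, here is an illustrative sketch of the fixed leading fields as a C struct. This is my own illustration rather than code from any real PPP stack; real implementations parse the variable-length data and trailing FCS byte by byte.
/*----------------- ppp_frame.c (illustrative sketch) -----------------*/
#include <stdint.h>
#include <stdio.h>

#define PPP_FLAG 0x7E   /* binary 01111110: frame delimiter */
#define PPP_ADDR 0xFF   /* all-stations broadcast; no station addresses */
#define PPP_CTRL 0x03   /* unsequenced frame, LLC Type 1-like service */

/* The fixed-size leading fields of a PPP frame (Figure 13-1). The
   variable-length data and the trailing FCS cannot live in a fixed
   struct, so they are located relative to the closing flag. */
struct ppp_header {
    uint8_t  flag;      /* 0x7E */
    uint8_t  address;   /* 0xFF */
    uint8_t  control;   /* 0x03 */
    uint16_t protocol;  /* e.g. 0x0021 = IP, 0xC021 = LCP (Assigned Numbers) */
};

int main(void)
{
    struct ppp_header h = { PPP_FLAG, PPP_ADDR, PPP_CTRL, 0x0021 };
    printf("flag=%02X addr=%02X ctrl=%02X proto=%04X (IP)\n",
           h.flag, h.address, h.control, h.protocol);
    return 0;
}
/*----------------- end of ppp_frame.c -----------------*/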
PPP Link-Control Protocol
The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection. LCP goes through four distinct phases.
First, link establishment and configuration negotiation occur. Before any network layer datagrams (for example, IP) can be exchanged, LCP first must open the connection and negotiate configuration parameters. This phase is complete when a configuration-acknowledgment frame has been both sent and received.
Next, link quality determination occurs: LCP allows an optional phase in which the link is tested to determine whether its quality is sufficient to bring up network layer protocols. At this point, network layer protocol configuration negotiation occurs. After LCP has finished the link quality determination phase, network layer protocols can be configured separately by the appropriate NCP and can be brought up and taken down at any time. If LCP closes the link, it informs the network layer protocols so that they can take appropriate action.
Finally, link termination occurs. LCP can terminate the link at any time. This usually is done at the request of a user but can happen because of a physical event, such as the loss of carrier or the expiration of an idle-period timer.
Three classes of LCP frames exist. Link-establishment frames are used to establish and configure a link. Link-termination frames are used to terminate a link, and link-maintenance frames are used to manage and debug a link.
These frames are used to accomplish the work of each of the LCP phases.
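To make the three classes concrete, here is a sketch that maps the LCP code numbers from RFC 1661 onto them:
/*----------------- lcp_codes.c (illustrative sketch) -----------------*/
#include <stdio.h>

/* The three LCP frame classes, using the code numbers from RFC 1661. */
enum lcp_code {
    /* link-establishment frames */
    LCP_CONFIGURE_REQUEST = 1,
    LCP_CONFIGURE_ACK     = 2,
    LCP_CONFIGURE_NAK     = 3,
    LCP_CONFIGURE_REJECT  = 4,
    /* link-termination frames */
    LCP_TERMINATE_REQUEST = 5,
    LCP_TERMINATE_ACK     = 6,
    /* link-maintenance frames */
    LCP_CODE_REJECT       = 7,
    LCP_PROTOCOL_REJECT   = 8,
    LCP_ECHO_REQUEST      = 9,
    LCP_ECHO_REPLY        = 10,
    LCP_DISCARD_REQUEST   = 11
};

static const char *lcp_class(enum lcp_code c)
{
    if (c <= LCP_CONFIGURE_REJECT) return "link-establishment";
    if (c <= LCP_TERMINATE_ACK)    return "link-termination";
    return "link-maintenance";
}

int main(void)
{
    printf("code %d -> %s\n", LCP_CONFIGURE_ACK, lcp_class(LCP_CONFIGURE_ACK));
    printf("code %d -> %s\n", LCP_ECHO_REQUEST,  lcp_class(LCP_ECHO_REQUEST));
    return 0;
}
/*----------------- end of lcp_codes.c -----------------*/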
Summary
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for transporting IP traffic over point-to-point links. PPP also established a standard for assigning and managing IP addresses, asynchronous and bit-oriented synchronous encapsulation, network protocol multiplexing, link configuration, link quality testing, error detection, and option negotiation for added networking capabilities.
PPP provides a method for transmitting datagrams over serial point-to-point links, which include the following three components:
- A method for encapsulating datagrams over serial links
- An extensible LCP to establish, configure, and test the connection
- A family of NCPs for establishing and configuring different network layer protocols
PPP is capable of operating across any DTE/DCE interface. PPP does not impose any restriction regarding transmission rate other than those imposed by the particular DTE/DCE interface in use.
Six fields make up the PPP frame. The PPP LCP provides a method of establishing, configuring, maintaining, and terminating the point-to-point connection.
Review Questions
Q—What are the main components of PPP?
A—Encapsulation of datagrams, LCP, and NCP.
Q—What is the only absolute physical layer requirement imposed by PPP?
A—The provision of a duplex circuit, either dedicated or switched, that can operate in either an asynchronous or synchronous bit-serial mode, transparent to PPP link layer frames.
Q—How many fields make up the PPP frame, and what are they?
A—Six: Flag, Address, Control, Protocol, Data, and Frame Check Sequence.
Q—How many phases does the PPP LCP go through, and what are they?
A—Four: Link establishment, link quality determination, network layer protocol configuration negotiation, and link termination.
Another Linux distro is coming: Ubuntu (unfamiliar with the name? Me neither, but it sounds like an African language). Well, it is indeed based on an African word, but I forget exactly what it means (something like "humanity towards others").
Anyway, unlike other distros that use KDE, this distro comes with GNOME as its default desktop GUI. I have not tried the GNOME desktop for a while, so I cannot comment on the latest GNOME.
For more detail, check this out: http://www.ubuntulinux.org/
Saturday, September 25, 2004
Next generation processors
According to this website, STI Cell Processor, a very sophisticated and advanced microprocessor is being jointly designed by three giants of microelectronics: Sony, Toshiba, and IBM. The processor will be used for 21st-century applications such as living-room multimedia, game consoles, and other applications that may require broadband access.
The interesting thing in this story is that broadband access will be more widely used in households, not only for entertainment equipment but also for appliances such as smart microwaves, smart refrigerators, or smart HVAC (heating, ventilation, and air conditioning). This string of applications will definitely require a powerful microprocessor, not only for general computation as on PCs today but also for many real-time processes in embedded systems.
Many impressive nanoelectronic technology breakthroughs and inventions will be implemented in this microprocessor. Among them are SOI (silicon-on-insulator); 65-nm EUV (extreme ultraviolet) lithography; the Cell architecture (loosely analogous to how the human brain works); low-k (low-permittivity) dielectrics, which reduce interconnect capacitance so that more silicon components (transistors, diodes, etc.) can be packed into a small die; and copper wiring.
This $400-million project will definitely change the way we think about a "PC", as the processor is considered a "supercomputer-on-a-chip". According to the site, the processor will be even more powerful than IBM's Blue Gene supercomputer, one of the fastest computers in the world. Not only will the processor reach tera floating-point operations per second (teraFLOPS), but it will also have about 20 "mini-cores" that work independently yet coherently and can be grouped together programmably through software.
Another interesting part is that Sony will use this processor in its next-generation game console, the PS3. If we look at how amazingly the NVIDIA 6800 Ultra performs, yet with far fewer FLOPS than this Cell processor, you can imagine how good a "teraFLOPS" Cell processor can be.
Utilizing massive data bandwidth and vast floating point capabilities, coupled with a parallel processing architecture, the Cell processor based development environment is expected to deliver quantum-leap innovation to entertainment applications. Cell-based workstations will be designed to expand the platform for creating digital content across future movie and video game entertainment industries.
Many applications, especially in multimedia and gaming, will get a huge performance boost from this chip. Video-rendering processes that now take hours, even days, could be done in minutes or even seconds. Ultra-clear super-surround sound, hyper-realistic 3D animation, and other capabilities unimaginable with current processors will be easily achieved by computers using these chips.
I believe the era of "WinTel" will soon dim and a new era of computing will shine. One thing I want to underline: I believe the first operating system to support this chip will be Linux. Believe me!
Saturday, September 18, 2004
I just got started learning more about this architecture. CORBA (Common Object Request Broker Architecture) is a paradigm for interfacing clients to server(s). I googled it and found out there are many different packages for it, from commercial ones such as VisiBroker to open source ones.
Check out http://orbit-resource.sourceforge.net/ as one of them. It has many links related to ORBs.
According to MaximumPC magazine, Alienware's Aurora ALX is the winner, followed closely by Falcon's Mach V. The Aurora is equipped with a 2.6 GHz Athlon 64 FX-53 processor, an ASUS A8V mobo with the VIA K8T800 Pro chipset, 1 GB of DDR400 (PC3200) RAM, a GeForce 6800 Ultra VGA card, RAID 0 hard disks, an SB Audigy 2 ZS audio card, one CD-RW, and one dual-layer DVD writer. It is able to run a game like DOOM 3 at 1280x1024 resolution with 4x AA enabled at 83.4 fps!
Wait a few more years and we will see new technologies coming to these home PCs, such as PCI Express (some mobos and video cards already use this slot), a new ATX form factor, dual- or even multi-core processors, and Microsoft's Longhorn OS. Not to mention the GigE and Wi-Fi interfaces that some mobos already include in their product lines.
Saturday, September 11, 2004
Just checking some computer makers' web sites, I've found that pre-built PCs are cheaper than buying the components separately and assembling them yourself. For instance, a 710G gaming PC from Gateway.com is priced at "only" $2000, equipped with a Pentium 4 3.2 GHz (800 MHz FSB), 1 GB of DDR RAM, a 250 GB SATA HD, a 19" monitor, a 256 MB NVIDIA GeForce 5950 Ultra, a SoundBlaster Audigy 2, etc.
Friday, September 3, 2004
Sunday, August 29, 2004
Well, I just solved one little problem during compilation of the new KDE 3.3. There was a problem while I was trying to compile kdelibs with the options --mt and --with-threading: it stopped with an error message something like "invalid ELF header" while compiling kdedoctools. It turned out to be caused by libpthread.so, which is actually a text file containing a pointer to the real shared object file (libpthread.so.22). It seems my new kernel (2.6.8.1) did not like it. After renaming it to libpthread.so.1, the compilation went through successfully.
Phew!
Monday, August 23, 2004
Some Interesting Software (Free!)
He just touches the surface, of course, and is only delving into some aspects of one particular implementation - but what we're seeing is that folks are gaining a greater understanding of these types of issues from a systems approach...
Apparently, we aren't talking about simple brute-forcing or birthday attacks, either. Antoine Joux just presented a paper on this subject at Crypto 2004 in Santa Barbara - did anyone attend?
Here's another one on MD5, MD4, HAVAL-128, and RIPEMD:
http://eprint.iacr.org/2004/199/
Parallel SSH - What is that? http://www.theether.org/pssh
RE: Some Interesting Software (Free!)
http://www.security.nnov.ru/advisories/timesync.asp
Sunday, August 22, 2004
Right now it does not have anything in it, as I just finished installing the Apache server on my box. Stay tuned to see new stuff there!
Sunday, August 1, 2004
Thanks.
/* A Demo for computing Polynomials. (C) 2004, The Seeker */

/* ------------ poly.h ----------------------- */
#ifndef POLYNOM_H
#define POLYNOM_H

#define MAX_POLYNOM_ELEMENTS 250

typedef struct {
    int coef;
    int pow_x;
    int pow_y;
} PType;

typedef struct {
    int n;
    PType *poly;
    //POLYNOM *next;
} POLYNOM;

typedef struct {
    POLYNOM *prev;
    POLYNOM polynom;
    POLYNOM *next;
} POLYNOMLIST;

#endif
/* ------------------ end of poly.h ---------------------------- */

/* ------------------ poly.c ----------------------------------- */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "poly.h"

const char *delim = " ";

/* Read a polynomial as a whitespace-separated sequence of triples:
   coef pow_x pow_y. Returns 1 on success, 0 on empty or bad input. */
int Polynom_GetInput(POLYNOM *plnm)
{
    char buf[1024];
    int i, j, n;
    div_t divr;
    char *p;
    int a[3*MAX_POLYNOM_ELEMENTS];

    //printf("Enter your polynomial variables (a sequence of triples: c pow_x pow_y)\n");
    printf("Enter Polynoms = ");
    if (fgets(buf, sizeof(buf), stdin) == NULL)   /* fgets instead of unsafe gets */
        return 0;
    buf[strcspn(buf, "\n")] = '\0';
    if (strlen(buf) == 0) {
        plnm->n = 0;
        plnm->poly = NULL;
        return 0;
    }
    n = 0;
    if ((p = strtok(buf, delim)) != NULL) {   /* found the first input */
        n = 1;
        a[0] = atoi(p);
    }
    while ((p = strtok(NULL, delim)) != NULL && n < 3*MAX_POLYNOM_ELEMENTS)
        a[n++] = atoi(p);
    if (n > 0) {
        /* that's all the input; now ensure it is a repetition of triplets */
        divr = div(n, 3);
        if (divr.rem == 0) {   /* yes, it is a sequence of triplets */
            plnm->poly = (PType*)malloc(n/3 * sizeof(PType));
            if (plnm->poly == NULL) exit(1);
            plnm->n = n/3;
            for (i = 0, j = 0; i < n; i += 3, j++) {
                plnm->poly[j].coef  = a[i];
                plnm->poly[j].pow_x = a[i+1];
                plnm->poly[j].pow_y = a[i+2];
            }
            return 1;
        }
    }
    return 0;
}

int Polynom_Copy(const POLYNOM *src, POLYNOM *dest)
{
    /* the destination polynom should not be NULL */
    if (src == NULL || dest == NULL) return 0;
    dest->poly = (PType *)malloc(src->n * sizeof(PType));
    if (dest->poly) {
        dest->n = src->n;
        memcpy(dest->poly, src->poly, src->n * sizeof(PType));
        return 1;
    }
    return 0;
}

void Polynom_Free(POLYNOM *p)
{
    if (!p) return;
    if (p->poly) {
        free(p->poly);
        p->n = 0;
        p->poly = NULL;
    }
}

/* Pretty-print a polynomial, e.g. "2x^2y-3x+1". */
void Polynom_Print(const POLYNOM *p)
{
    int i;
    char strc[50], strx[50], stry[50];
    short sign;
    char strsign[5];
    short first_time = 1;
    int coef;

    if (!p) return;
    strx[0] = '\0';
    stry[0] = '\0';
    for (i = 0; i < p->n; i++) {
        coef = p->poly[i].coef;
        /* ignore coef == 0; using a temporary var for sign is slightly
           faster than accessing the structure through pointer p */
        if (coef != 0) {
            sign = (coef < 0);
            if (sign)
                strcpy(strsign, "-");
            else
                strcpy(strsign, first_time ? "" : "+");
            first_time = 0;
            if (abs(coef) == 1 && (p->poly[i].pow_x != 0 || p->poly[i].pow_y != 0))
                strcpy(strc, strsign);
            else
                sprintf(strc, "%s%0d", strsign, abs(coef));
            if (p->poly[i].pow_x == 0)
                strcpy(strx, "");
            else if (p->poly[i].pow_x == 1)
                strcpy(strx, "x");
            else
                sprintf(strx, "x^%-d", p->poly[i].pow_x);
            if (p->poly[i].pow_y == 0)
                strcpy(stry, "");
            else if (p->poly[i].pow_y == 1)
                strcpy(stry, "y");
            else
                sprintf(stry, "y^%-d", p->poly[i].pow_y);
            printf("%s%s%s", strc, strx, stry);
        }
    }
    printf("\n");
}

/* Add two terms of like order; returns 0 if the orders differ. */
int Polynom_Add(PType *result, const PType P1, const PType P2)
{
    if (result == NULL) return 0;
    if ((P1.pow_x == P2.pow_x) && (P1.pow_y == P2.pow_y)) {
        /* P1 & P2 have the same order of x,y */
        result->coef = P1.coef + P2.coef;
        return 1;
    }
    return 0;
}

void Polynom_Simplify(POLYNOM *p1, POLYNOM *p2)
{
    /* TODO: combine terms of like order */
}

int main(int argc, char *argv[])
{
    POLYNOM p1;

    while (Polynom_GetInput(&p1)) {
        Polynom_Print(&p1);
        Polynom_Free(&p1);
    }
    return 0;
}
/* ------------------ end of poly.c ------------------------------ */
Friday, May 28, 2004
Linux on the PS2 by John Littler -- As consoles increase in power and alternate operating systems increase in functionality and flexibility, it's ever more attractive to port your favorite free operating system. In the case of Sony's PlayStation 2, the company even encourages it. John Littler explores Linux on the PS2, including hardware, installation, upgrades, alternatives, and game programming.
coLinux: Linux for Windows Without Rebooting by KIVILCIM Hindistan -- Trying Linux just keeps getting easier. Knoppix and other live CDs let you take Linux with you on CD and USB keys, but you have to reboot to run your software. What about Windows users who want to use Linux in conjunction with their existing systems? KIVILCIM Hindistan explores the world of coLinux -- cooperative Linux.
Build Strings with { } by Jerry Peek -- Save typing by expanding strings at the shell prompt. Learn how to use the {} pattern-expansion characters in this excerpt from Unix Power Tools, 2nd Edition.
Using and Customizing Knoppix by Robert Bernier -- Several Linux distributions boot directly from CD-ROMs. How many are usable in that state? How many are customizable in that state? Klaus Knopper's Knoppix is perhaps the best known of these distributions. Robert Bernier explains how to use Knoppix and how to customize your own self-booting distribution CD.
Variable Manipulation and Output by John Coggeshall -- John Coggeshall covers basic variable manipulation and output, including math operators and strings.
Basic PHP Syntax by John Coggeshall -- John Coggeshall covers basic PHP syntax, including variable usage, variable types, and how to print variables to the web browser.
Introduction to Socket Programming with PHP by Daniel Solin -- Daniel Solin uses a game analogy to show how PHP can be used to exchange data between two computers using network sockets.
An Introduction to Extreme Programming by chromatic -- When you look at it closely, Extreme Programming isn't really as extreme as it is logical. This introduction shows you the tenets of XP and its relationship to open source methods for writing software.
Tuesday, May 25, 2004
- 2105 - Cisco Systems' Tag Switching Architecture Overview
- 2104 - HMAC: Keyed-hashing for Message Authentication
- 2095 - IMAP/POP AUTHorize Extension for Simple Challenge/Response
- 2085 - HMAC-MD5 IP Authentication with Replay Prevention
- 2083 - PNG (Portable Network Graphics) Spec. version 1.0
- 2082 - RIP2 MD5 Authentication
- 2080 - RIPng for IPv6
- 2069 - An Extension to HTTP: Digest Access Authentication
- 2068 - Hypertext Transfer Protocol -- HTTP/1.1
- 2058 - Remote Authentication Dial In User Service (RADIUS)
- 2046 - Multipurpose Internet Mail Extensions (MIME) Part 2: Media Types
- 2045 - Multipurpose Internet Mail Extensions (MIME) Part 1: Format of Internet Message Bodies