I’ve been trying for some seven years to get the University of New Mexico to let me start offering hard-core cyber-security (i.e. hacking) certification courses, without even a whiff of success until recently. The Marketing Department and Custom Training division surveyed our captive audience, which is pretty sizable: Sandia National Labs, Los Alamos National Labs, Kirtland Air Force Base and three other bases in the state; sizable state, county and tribal entities; and mega-corps like Intel and HP.
We looked at their interest in ITIL, (ISC)2’s CISSP, ISACA’s CISA, Cisco’s CCNA-Security, GIAC’s GPEN, ISECOM’s OPST, EC-Council’s CEH, and Offensive Security’s OSCP.
One big factor that all clients considered was national and local demand for certified pros here in New Mexico. While many of the job sites aren’t completely forthcoming about how many jobs match a keyword, LinkedIn offers hard numbers for both global and state job openings that request or require particular certifications. LinkedIn reported:
8,954 job listings mentioning ITIL certification, 26 in New Mexico;
9,036 mentioning the CISSP, 22 in New Mexico;
8,779 mentioning the CISA, 4 in New Mexico;
11,416 mentioning the CCNA, 37 in New Mexico;
395 mentioning GPEN certification, 1 in New Mexico;
13 mentioning the OPST certification, 0 in New Mexico;
3,006 mentioning the CEH, 2 in New Mexico; and
794 mentioning the OSCP, 1 in New Mexico.
Of these, the last four could be called the “hackiest.” ISECOM’s OPST showed very weak numbers both globally and locally, so despite some interesting aspects to its practice, none of our audience members showed the slightest interest. The GPEN showed more global-level strength, and attracted some attention from the national facilities, but it really needs to live within the ecosystem of GIAC curricula. The OSCP is the truly hard-core hacker’s cert, with its 24-hour examination, but isn’t really “taught” at all; you have to hack and crack your way to a conclusion. It kind of cuts out the middle-man (teachers).
Mentioning the CEH started phones ringing immediately. UNM let me set up an InfoByte session to discuss all these certs and get a feel for what people would pay for. Which cert made ears perk up? The CEH.
I know quite a bit about the organizations and people that were in play in the creation of EC-Council. Despite the extremely tricky test, one individual’s “Run Away From the CEH” propaganda campaign (you can find the various renditions of the article in lots of places on the Internet) succeeded in spreading an early perception that EC-Council is a “diploma mill,” among other accusations. I’ve studied v8 and v9, and find the CEH has definitely matured as a certification, with an exam that is still quite tough, and more tightly focused on current issues and tools than ever.
So finally – finally! – I got the certification and UNM scheduled one section of a Certified Ethical Hacker class. Where I’ve had to struggle to find students to make some classes run, the CEH class made minimum enrollment (5 students) within hours of appearing in the online catalog. And certain entities are already asking about custom and on-site trainings, always a sign of a program with legs.
We’ll see how this first section goes. If interest persists or increases, my next campaign will be urging UNM to become an “official” EC-Council training center (and getting myself EC-Council instructor certified). While the word “official” carries some weight, when you self-study or get “unofficial” training you simply pay $100 extra above the $650 test registration fee.
I’ll have a lot to say about how I studied, what materials I used and my impressions (without details, of course) of the exam. For the moment I’m delighted to have found a pony that can run in this race. Updates will follow.
The following detailed article was provided by Kirk L. Mason:
Computers have used many forms of memory over the years. Some may remember drum, core, and bubble memory. Since the introduction of the Personal Computer in 1981, computer memory has settled into semiconductor-based forms of ROM and RAM.
ROM and RAM can be found in both asynchronous and synchronous packages.
Asynchronous simply means that there is no clock synchronizing the address and data lines of the device: the device is presented with an address and the proper control signals (read/write, chip select, output enable/disable), and the data is either presented to or accepted from the data bus.
Synchronous means that everything is synchronized to a common trigger point on a clock signal. Trigger points can be when the clock signal is high, low, or transitioning to a high or low state, or a combination of any of the above. All that is required is that the inputs be stable by the time the trigger condition is reached.
ROM or Read Only Memory
ROM is not the topic of this page, but I have included a dissertation at the end for those interested.
RAM or Random Access Memory
RAM is used throughout today’s computers.
Static RAM (SRAM) is fast compared to Dynamic RAM (DRAM) but considerably more complex to manufacture. SRAM is simple to access: you set an address on the bus, set the control pins (read, write, select), and the data is either presented to or read from the data bus. Static memory is stable as long as the device is given power.
Dynamic RAM (DRAM) is much simpler to manufacture but considerably more complex to operate. Between read and write cycles the memory controller must “refresh” the data stored in each cell. Modern DRAM devices are able to manage this function largely on their own: the memory controller simply applies the appropriate command to the DRAM and provides the necessary number of clock pulses. Each refresh cycle requires that all banks of internal memory be idle, and refreshes one row of data across all internal memory banks. Typically, every row must be refreshed within a 64 ms window. Because DRAM was so slow, it was common for early PC processors to be given wait states after each memory request to allow the memory subsystem sufficient time to service it.
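The refresh arithmetic above can be sketched in a few lines of Python. The 64 ms window comes from the text; the row count of 8,192 is an assumed example for illustration, since real devices vary by density and organization:

```python
# Hedged sketch of DRAM refresh timing. The 64 ms retention window is typical;
# the row count below is an assumed example, not a datasheet value.
RETENTION_WINDOW_MS = 64   # every row must be refreshed within this window
ROWS = 8192                # hypothetical number of rows per bank

# Average spacing between refresh commands so all rows fit in the window.
interval_us = (RETENTION_WINDOW_MS * 1000) / ROWS
print(f"one refresh command roughly every {interval_us:.2f} microseconds")
```

With these example numbers the controller issues a refresh command roughly every 7.8 µs, which is why refresh overhead is small but never zero.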
The smallest-capacity RAM device is typically the motherboard’s CMOS RAM, where the system battery maintains configuration data. CMOS RAM (also called Non-Volatile RAM or NVRAM) has ranged in size from the early 64B Motorola devices to the more current 2MB STMicroelectronics devices. A typical CMOS memory map can be seen here.
Medium-capacity RAM is found in devices such as the CPU cache (12MB for the current Intel i7-980X), disk drive cache (32MB for the current Seagate Barracuda line) and video adapters (up to 6GB for some current high-end NVIDIA Quadro 6000 adapters).
When people speak of computer memory they are commonly speaking of system memory. This is memory used by the CPU to run applications and services.
Early PCs were limited by Microsoft DOS (Disk Operating System) to 640KB. The original PCs were based on the Intel 8086 processor, a 16-bit processor with a 20-bit external address bus. These attributes limited the total address capacity to 1MB, of which the processor could actively address only 64KB at a time. Today’s processors use a 64-bit architecture and a 64-bit address bus capable of addressing 2^64 bytes, or roughly 18.4 exabytes (EB), of memory. Current Windows platforms limit system memory to 2TB for some 64-bit versions of Server 2008, but are more typically limited to 16GB for Windows 7 Home Premium and 192GB for versions above that level.
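The addressing figures above are simple powers of two; a quick sketch (the function name is my own, used only for illustration):

```python
# Bytes reachable on a byte-addressed bus with the given number of address lines.
def addressable_bytes(address_bits: int) -> int:
    return 2 ** address_bits

print(addressable_bytes(20))            # 1048576 bytes: the 8086's 1MB limit
print(addressable_bytes(64) / 10**18)   # about 18.45, i.e. roughly 18.4 EB
```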
Enough on the history and variations of PC memory; the information that follows concerns the actual hardware of PC system memory.
Computer memory modules can generally be broken down by physical package:
30-pin SIMMs are electrically the same on each side of each contact (if the PCB is plated on both sides).
30-pin SIMMs have 12 address lines, which (multiplexed into row and column addresses) can provide a total of 24 address bits. An 8-bit data width (1 byte wide) leads to an absolute maximum capacity of 16 MB.
(Table: 30-pin SIMM sizes in bytes, non-parity and parity variants.)
72-pin SIMMs are not electrically the same on each side of a contact. This allows for dual-sided SIMMs, which act as if two single-sided SIMMs were adhered back to back. Keep in mind that some motherboards, seeing the two sides as distinct, may not accept memory inserted into all memory slots; a dual-sided SIMM may be counted as two SIMMs.
72-pin SIMMs have 12 address lines, which can provide a total of 24 address bits. A 32-bit data width (4 bytes wide) leads to an absolute maximum capacity of 64 MB.
(Table: 72-pin SIMM sizes in bytes, non-parity and parity/ECC variants.)
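The two SIMM capacity limits quoted above fall straight out of the address-bit math; a quick sketch (the function is illustrative, not a real API):

```python
# Maximum SIMM capacity: 12 multiplexed address lines give 24 address bits,
# and capacity = 2**address_bits * data width in bytes.
def simm_max_capacity_mb(address_lines: int, data_width_bytes: int) -> int:
    address_bits = 2 * address_lines   # row address + column address
    return (2 ** address_bits) * data_width_bytes // (1024 * 1024)

print(simm_max_capacity_mb(12, 1))   # 30-pin SIMM (8-bit wide): 16 MB
print(simm_max_capacity_mb(12, 4))   # 72-pin SIMM (32-bit wide): 64 MB
```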
72-pin SIMMs were also produced with FPM, EDO, and BEDO DRAM:
Fast page mode DRAM is also called FPM DRAM, Page mode DRAM, Fast page mode memory, or Page mode memory.
In page mode, a row of the DRAM can be kept “open” while performing multiple reads or writes so that successive reads or writes within the row do not suffer the delay of precharge and accessing the row. This increases the performance of the system when reading or writing bursts of data.
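A toy timing model can illustrate why keeping a row open helps; the cycle counts here are invented for illustration, not taken from any datasheet:

```python
# Toy model of page-mode savings. Opening a row costs a precharge/activate
# penalty; accesses within an already-open row pay only the column-access time.
ROW_OPEN_CYCLES = 5   # assumed precharge + row-activate cost
COLUMN_CYCLES = 2     # assumed column (CAS) cost per access

def burst_cycles(accesses: int, page_mode: bool) -> int:
    if page_mode:
        # open the row once, then every access is a column hit
        return ROW_OPEN_CYCLES + accesses * COLUMN_CYCLES
    # without page mode, every access re-opens the row
    return accesses * (ROW_OPEN_CYCLES + COLUMN_CYCLES)

print(burst_cycles(8, page_mode=False))  # 56 cycles
print(burst_cycles(8, page_mode=True))   # 21 cycles
```

Even with made-up costs, the shape of the result holds: the longer the burst within one row, the more the row-open penalty is amortized.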
Extended Data Out (EDO) DRAM, sometimes referred to as Hyper Page Mode enabled DRAM, is similar to Fast Page Mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. It was about 5% faster than Fast Page Mode DRAM, which it began to replace in 1995.
Burst Extended Data Out (BEDO) DRAM combined EDO DRAM with pipelining technology and special latches to allow much faster access times than regular EDO DRAM. Intel chose not to support BEDO memory in its chipsets, turning instead to SDRAM; hence, BEDO memory did not gain much market share.
Synchronous DRAM (SDRAM, later renamed Single Data Rate SDRAM, or SDR SDRAM) operates in lock-step with the system clock. The memory clock speed is referred to as the Front Side Bus (FSB): a system clock of 100MHz produced an FSB clock of 100MHz, and one cycle of the system clock yielded one word of data. These memory devices were referred to as PC66, PC100, or PC133, depending on the corresponding FSB.
168-pin DIMMs have two notches. The pin out diagram shows 13 address lines which can be multiplexed into a row and column address of a matrix. It also shows a data bus of 64 bits.
Double Data Rate RAM (DDR-RAM) is very similar to SDR SDRAM except that it operates at a lower voltage and produces two words of data per clock cycle, delivering subsequent sequential data addresses at a rate of two per clock cycle. A system clock of 100MHz produced an FSB clock of 200MHz, and one cycle of the system clock produced two sequential words of data.
184-pin DIMMs have one notch. The pin out diagram shows 13 address lines which can be multiplexed into a row and column address of a matrix. It also shows a data bus of 64 bits.
DDR2 RAM once again doubled the output of the device and lowered the operating voltage. DDR2 devices produce four words of data per clock cycle and deliver subsequent sequential data addresses at a rate of four per clock cycle. A system clock of 100MHz produced an FSB clock of 400MHz, and one cycle of the system clock produced four sequential words of data.
DDR3 RAM doubled again the output of the DDR2 device and again lowered the operating voltage. DDR3 devices produce eight words of data per clock cycle and deliver subsequent sequential data addresses at a rate of eight per clock cycle. A system clock of 100MHz produced an FSB clock of 800MHz, and one cycle of the system clock produced eight sequential words of data.
DDR4 RAM is expected to hit the market in 2012, doubling the output of the DDR3 device and again lowering the operating voltage. DDR4 devices will produce 16 words of data per clock cycle and deliver subsequent sequential data addresses at a rate of 16 per clock cycle. A system clock of 100MHz would produce an FSB clock of 1600MHz, and one cycle of the system clock would produce 16 sequential words of data.
240-pin DIMMs have one notch. The notch location changes with each version of the DDR specification because the operating voltage is reduced; DDR4 is neither physically nor electrically compatible with DDR3, which in turn is not compatible with DDR2. The pin-out diagram shows 13 address lines, which can be multiplexed into a row and column address of a matrix. It also shows a data bus of 64 bits.
DDR Naming Conventions and Characteristics
(Table: front side bus speed, transactions per cycle, and peak transfer rate for each generation.)
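The peak transfer rates follow from the doubling pattern the preceding paragraphs describe: base clock, times words per clock, times the 8-byte (64-bit) module width. A sketch using the article's 100MHz example clock (the function name is my own):

```python
# Peak transfer rate for a 64-bit-wide module, per the doubling pattern above.
BUS_WIDTH_BYTES = 8  # 64-bit DIMM data bus

def peak_mb_per_s(base_clock_mhz: int, words_per_clock: int) -> int:
    return base_clock_mhz * words_per_clock * BUS_WIDTH_BYTES

print(peak_mb_per_s(100, 1))   # SDR PC100:  800 MB/s
print(peak_mb_per_s(100, 2))   # DDR-200:   1600 MB/s
print(peak_mb_per_s(100, 4))   # DDR2-400:  3200 MB/s
print(peak_mb_per_s(100, 8))   # DDR3-800:  6400 MB/s
```

These figures match the familiar module names: PC100, PC-1600, PC2-3200, and PC3-6400 all encode the peak MB/s (or MHz, for the earliest generation) directly.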
ROM or Read Only Memory
ROM is fast, non-volatile memory which does not require battery backup as CMOS RAM does. Think of the evolution of the Compact Disc (CD). Originally, only the manufacturer had the capability to create a CD. Later, some bright soul figured out a way to sell blank CDs (CD-R) and let you “burn” your own. If you burned one wrong, it was discarded and another was burned. Eventually, re-writable CDs (CD-RW) were developed, allowing you to erase the CD and burn it again with updated information. This evolution tracked closely with ROM development in the early ’80s.
ROM was memory created at the factory and never changed. Should an error be found or the data need to be changed, replacement of the device was required. Computer developers would generate their code (the data to be stored on the ROM), test it thoroughly, save it to magnetic (typically) or paper tape, and send it off to the manufacturer. Unless a generous expedite fee was paid, you could expect to wait a few weeks for samples of your ROM. As with the later CDs, some soul devised a way for the developer to “burn” their own ROM, and hence the Programmable ROM, or PROM, was born. If your code was defective, you simply corrected the code and burned a fresh PROM, discarding the earlier version. Updated PROMs could be generated in a matter of minutes.
Eventually, Erasable PROMs (EPROMs) were introduced: the developer could expose the device to an ultraviolet light source, and a few hours (and a good suntan) later was able to burn the updated code and try again. EPROMs were good for development, but were too expensive for production. Finally, in 1983 the Electrically Erasable PROM (EEPROM) was brought to market. The EEPROM was inexpensive and could be erased and re-burned using a simple application run on the computer itself.
ROM is used in the computer as the BIOS for the motherboard and all sub-assemblies. Any time you update a component’s BIOS or firmware, you are electrically erasing and re-burning the device’s ROM.