Finite State Machine

Definition

A finite state machine (FSM) consists of a set of states, a start state, an input, and a transition function that maps the current state and the input to a next state. The machine begins in the start state and, as inputs arrive, moves to new states according to the transition function. The output of the machine depends on the input and/or the current state.

There are two types of FSMs popularly used in digital design:

  • Moore machine
  • Mealy machine
Moore machine

In a Moore machine the output depends only on the current state. The advantage of the Moore model is a simplification of the behavior.

Mealy machine

In a Mealy machine the output depends on both the current state and the input. The advantage of the Mealy model is that it may lead to a reduction in the number of states.

In both models the next state depends on the current state and the input. Sometimes designers use mixed models. Each state is assigned a binary code that represents it (see State Encoding below).

Representation of a FSM

A FSM can be represented in two forms:
  • Graph Notation
  • State Transition Table
Graph Notation
  • In this representation every state is a node. A node is drawn as a circle, and the state code is written inside the circle.
  • A state transition is represented by a directed edge (an arrow). The tail of the edge is the current state and the head points to the next state, as determined by the input and the current state. The transition condition is written on the edge.
  • The initial/start state is sometimes drawn with a double circle, or with a different colour shade.
The following image shows the graph notation of an FSM. The codes 00 and 11 are the state codes; 00 is the initial/reset state, so the machine starts in state 00. If the machine is reset, the next state will be state 00.


State Transition Table

The State Transition Table has the following columns:
  • Current State: Contains current state code
  • Input: Input values of the FSM
  • Next State: Contains the next state code
  • Output: Expected output values
An example of state transition table is shown below.


Mealy FSM

In a Mealy machine the output depends on both the current state and the input. The advantage of the Mealy model is that it may lead to a reduction in the number of states.


The block diagram of the Mealy FSM is shown above. The output function depends on the input as well. The current-state function updates the current state register (the number of bits depends on the state encoding used).


The above FSM is an example of a Mealy FSM; the text on the arrows shows (condition)/(output). 'a' is the input and 'x' is the output.
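As a rough illustration, here is a minimal Verilog sketch of a Mealy FSM with input a and output x. Since the figure is not reproduced here, the two states and the transition conditions below are illustrative assumptions, not a transcription of the diagram.

module mealy_example (
    input  wire clk,
    input  wire reset,   // synchronous, active-high reset
    input  wire a,
    output reg  x
);
    localparam S0 = 1'b0,
               S1 = 1'b1;

    reg state;

    // Current-state register and next-state function
    always @(posedge clk) begin
        if (reset)
            state <= S0;               // initial/reset state
        else
            state <= a ? S1 : S0;      // assumed transition: move to S1 while a is high
    end

    // Output function: depends on the current state AND the input (Mealy)
    always @(*)
        x = (state == S1) && a;        // x = 1 when a has been high for a second cycle
endmodule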

Moore FSM

In a Moore machine the output depends only on the current state. The advantage of the Moore model is a simplification of the behavior.


The above figure shows the block diagram of a Moore FSM. The output function does not depend on the input. The current-state function updates the current state register.


The above FSM is an example of a Moore FSM. 'a' is the input. Inside every circle the text is (state code)/(output). Here there is only one output; in state '11' the output is '1'.
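A corresponding minimal Verilog sketch of a Moore FSM with input a is shown below. The output is 1 only in state 11, matching the description above; the intermediate state code and the transition conditions are illustrative assumptions.

module moore_example (
    input  wire clk,
    input  wire reset,   // synchronous, active-high reset
    input  wire a,
    output wire x
);
    localparam S00 = 2'b00,   // initial/reset state
               S01 = 2'b01,
               S11 = 2'b11;

    reg [1:0] state;

    // Current-state register and next-state function
    always @(posedge clk) begin
        if (reset)
            state <= S00;
        else
            case (state)
                S00:     state <= a ? S01 : S00;
                S01:     state <= a ? S11 : S00;
                S11:     state <= a ? S11 : S00;
                default: state <= S00;
            endcase
    end

    // Output function: depends only on the current state (Moore)
    assign x = (state == S11);
endmodule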

In both FSMs the reset signal sets the contents of the current state register to the initial/reset state.

State Encoding

In a FSM design each state is represented by a binary code, which is used to identify the state of the machine. These codes are the possible values of the state register. The process of assigning binary codes to the states is known as state encoding.
The choice of encoding plays a key role in FSM design. It influences the complexity, size, power consumption, and speed of the design. If the encoding is chosen so that the transitions of the flip-flops (of the state register) are minimized, power is saved. The timing of the machine is also often affected by the choice of encoding.
The choice of encoding depends on the target technology (ASIC, FPGA, CPLD, etc.) as well as the design specifications.

State encoding techniques

The following are the most common state encoding techniques used.
  • Binary encoding
  • One-hot encoding
  • Gray encoding
In the following explanation, assume that the FSM has N states.

Binary encoding

The code of a state is simply a binary number. The number of bits equals log2(N) rounded up to the next integer. Suppose N = 6; then the number of bits is 3 and the state codes are:
S0 - 000
S1 - 001
S2 - 010
S3 - 011
S4 - 100
S5 - 101

One-hot encoding
In one-hot encoding only one bit of the state vector is asserted for any given state; all other state bits are zero. Thus if there are N states then N state flip-flops are required. Since only one bit remains logic high and the rest are logic low, it is called one-hot encoding. If N = 5, then the number of bits (flip-flops) required is 5, and the state codes are:
S0 - 00001
S1 - 00010
S2 - 00100
S3 - 01000
S4 - 10000

One-hot encoding is discussed in more detail in the One-hot Encoding section below.

Gray encoding
Gray encoding uses Gray codes, also known as reflected binary codes, to represent states; two successive codes differ in only one bit. This helps in reducing the number of transitions of the flip-flop outputs. The number of bits equals log2(N) rounded up to the next integer. If N = 4, then 2 flip-flops are required and the state codes are:
S0 - 00
S1 - 01
S2 - 11
S3 - 10

Designing a FSM is one of the most common and challenging tasks for a digital logic designer. One of the key factors in optimizing a FSM design is the choice of state coding, which influences the complexity of the logic functions, the hardware cost of the circuit, timing, power usage, etc. There are several options like binary encoding, gray encoding, one-hot encoding, etc. The designer's choice depends on factors such as the target technology and the design specifications.
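To make the options concrete, the Verilog sketch below declares the same four states with each of the three encodings. The state names, code assignments, and register names are illustrative assumptions; synthesis tools can often re-encode the state register automatically.

module encoding_examples;
    // Binary encoding: 2 flip-flops cover 4 states
    localparam [1:0] B_S0 = 2'b00, B_S1 = 2'b01, B_S2 = 2'b10, B_S3 = 2'b11;

    // Gray encoding: 2 flip-flops, successive codes differ in exactly one bit
    localparam [1:0] G_S0 = 2'b00, G_S1 = 2'b01, G_S2 = 2'b11, G_S3 = 2'b10;

    // One-hot encoding: 4 flip-flops, exactly one bit set per state
    localparam [3:0] H_S0 = 4'b0001, H_S1 = 4'b0010,
                     H_S2 = 4'b0100, H_S3 = 4'b1000;

    reg [1:0] state_bin;   // state register when binary or Gray encoding is used
    reg [3:0] state_oh;    // state register when one-hot encoding is used
endmodule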

One-hot Encoding


One-hot encoding


In one-hot encoding only one bit of the state vector is asserted for any given state; all other state bits are zero. Thus if there are n states then n state flip-flops are required. Since only one bit remains logic high and the rest are logic low, it is called one-hot encoding.
Example: if a FSM has 5 states, then 5 flip-flops are required to implement it using one-hot encoding. The states will have the following values (a small Verilog sketch follows the list):
S0 - 10000
S1 - 01000
S2 - 00100
S3 - 00010
S4 - 00001
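A minimal Verilog sketch of this encoding is given below, using the bit positions listed above (S0 is the most significant bit). The simple ring-style transitions and the busy flag are illustrative assumptions; the point is that a single state bit can be read directly, which is the decoding advantage discussed next.

module one_hot_example (
    input  wire clk,
    input  wire reset,   // synchronous, active-high reset
    output wire busy     // assumed example flag: high while the FSM is in S2
);
    // Bit index of each state within the 5-bit one-hot state register
    localparam S0 = 4, S1 = 3, S2 = 2, S3 = 1, S4 = 0;

    reg [4:0] state;

    always @(posedge clk) begin
        if (reset)
            state <= 5'b10000;                // only the S0 bit is set
        else
            state <= {state[0], state[4:1]};  // assumed transitions: S0 -> S1 -> ... -> S4 -> S0
    end

    // Direct state decoding: one state bit already says "we are in this state"
    assign busy = state[S2];
endmodule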

Advantages

  • State decoding is simplified, since a state bit itself tells whether the FSM is in a particular state or not. Hence no additional decoding logic is required, which is a big advantage when implementing a large FSM.
  • Low switching activity, resulting in low power consumption, and less proneness to glitches.
  • Modifying a design is easier. Adding or deleting a state and changing the state transition equations (the combinational logic of the FSM) can be done without affecting the rest of the design.
  • Faster than other encoding techniques. The speed is independent of the number of states and depends only on the number of transitions into a particular state.
  • Finding the critical path of the design is easier (static timing analysis).
  • One-hot encoding is particularly advantageous for FPGA implementations. If a big FSM is implemented on a FPGA, a regular encoding like binary or gray uses fewer flip-flops for the state vector than one-hot encoding, but additional logic is required to encode and decode the state. Since each FPGA logic block already contains one or more flip-flops, the encoding and decoding logic makes a regular-encoding FSM use more logic blocks than a one-hot FSM.
Disadvantages
  • The only disadvantage of one-hot encoding is that it requires more flip-flops than techniques like binary, gray, etc. The number of flip-flops required grows linearly with the number of states. Example: for a FSM with 38 states, one-hot encoding requires 38 flip-flops, whereas binary encoding requires only 6.

Microprocessor Interview Questions - 5

1. Why are program counter and stack pointer 16-bit registers?
Answer


2. What happens during DMA transfer?
Answer

3. Define ISR.
Answer

4. Define PSW.
Answer

5. What are the execution modes available in x86 processors?
Answer

6. What is meant by real mode?
Answer

7. What is protected mode?
Answer

8. What is virtual 8086 mode?
Answer

9. What is unreal mode?
Answer

10. What is the difference between ISR and a function call?
Answer

SoC : System-On-a-Chip

System-on-a-chip (SoC) refers to integrating all components of an electronic system into a single integrated circuit (chip). A SoC can include the integration of:

  • Ready made sub-circuits (IP)
  • One or more microcontroller, microprocessor or DSP core(s)
  • Memory components
  • Sensors
  • Digital, Analog, or Mixed signal components
  • Timing sources, like oscillators and phase-locked loops
  • Voltage regulators and power management circuits
The blocks of a SoC are connected by a special bus, such as the AMBA bus. DMA controllers route data directly between external interfaces and memory, bypassing the processor core and thereby increasing the data throughput of the SoC. SoCs are widely used in embedded systems. They can be fabricated with several technologies, such as full custom, standard cell, and FPGA. SoC designs are usually power- and cost-effective, and more reliable than the corresponding multi-chip systems. A programmable SoC is known as a PSoC.

Advantages of SoC are:
  • Small size, reduction in chip count
  • Low power consumption
  • Higher reliability
  • Lower memory requirements
  • Greater design freedom
  • Cost effective
Design Flow

A SoC consists of both hardware and software (to control the SoC components). The aim of SoC design is to develop the hardware and software in parallel. SoC design uses pre-qualified hardware blocks along with the software (drivers) that controls them. The hardware blocks are put together using CAD tools; the software modules are integrated using a software development environment. The SoC design is then programmed onto a FPGA, which helps in testing the behavior of the SoC. Once the SoC design passes testing, it goes through the place-and-route process and is then fabricated. The chips are finally completely tested and verified.

VLSI Interview Questions - 6

1. Why is NAND gate preferred over NOR gate for fabrication?
Answer


2. Which transistor has higher gain: BJT or MOSFET and why?
Answer

3. Why are PMOS and NMOS sized equally in a transmission gate?
Answer

4. What is SCR?
Answer

5. In CMOS digital design, why is the size of the PMOS generally larger than that of the NMOS?
Answer

6. What is slack?
Answer

7. What is latch up?
Answer

8. Why is the size of inverters in buffer design gradually increased? Why not give the output of a circuit to one large inverter?
Answer

9. What is Charge Sharing? Explain the Charge Sharing problem while sampling data from a Bus?
Answer

10. What happens to delay if load capacitance is increased?
Answer

Complex Programmable Logic Device

A complex programmable logic device (CPLD) is a semiconductor device containing programmable blocks called macrocells, which contain logic implementing disjunctive normal form (sum-of-products) expressions and more specialized logic operations. A CPLD's complexity lies between that of PALs and FPGAs; it can have up to about 10,000 gates. CPLDs offer very predictable timing characteristics and are therefore ideal for critical control applications.

Applications

  • CPLDs are ideal for critical, high-performance control applications.
  • CPLDs can be used for digital designs that perform boot-loader functions.
  • CPLDs are used to load configuration data for an FPGA from non-volatile memory.
  • CPLDs are generally used for small designs, for example simple applications such as address decoding.
  • CPLDs are often used in cost-sensitive, battery-operated portable applications because of their small size and low power usage.
Architecture

A CPLD contains a number of programmable functional blocks (FBs) whose inputs and outputs are connected together by a global interconnection matrix. The global interconnection matrix is reconfigurable, so the connections between the FBs can be changed. There are also I/O blocks which connect the CPLD to the external world. The block diagram of the CPLD architecture is shown below.


A programmable functional block typically looks like the one shown below. There is an array of AND gates which can be programmed, while the OR gates are fixed; however, each manufacturer has its own way of building the functional block. A registered output can be obtained by manipulating the feedback signals obtained from the OR outputs.


CPLD Programming


The design is first coded in an HDL (Verilog or VHDL) and validated by simulation and synthesis. During synthesis the target device (CPLD model) is selected and a technology-mapped netlist is generated. The netlist is then fitted to the actual CPLD architecture using a process called place-and-route, usually performed by the CPLD vendor's proprietary place-and-route software. The user then runs verification; if everything is fine the CPLD is programmed and used, otherwise the design is modified and the device is reconfigured.

Introduction to Digital Logic Design

>> Introduction
>> Binary Number System
>> Complements
>> 2's Complement vs 1's Complement
>> Binary Logic
>> Logic Gates


Introduction

The fundamental idea of digital systems is to represent data in discrete form (binary: ones and zeros) and to process that information. Digital systems have led to many scientific and technological advancements. Calculators and computers are examples of digital systems, widely used for commercial and business data processing. The most important property of a digital system is its ability to follow a sequence of steps, called a program, to perform the required data processing. The following diagram shows what a typical digital system looks like.


Representing data in ones and zeros, i.e. in the binary number system, is the root of digital systems. All digital systems store data in binary format. Hence it is very important to know about the binary number system, which is explained below.

Binary Number System

The binary number system, or base-2 number system, is a number system that represents numeric values using two symbols, usually 0 and 1. The base-2 system is a positional notation with a radix of 2. Owing to its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by all computers. Suppose we need to represent 14 in binary number system.
14 = 01110 = 0x2^4 + 1x2^3 + 1x2^2 + 1x2^1 + 0x2^0
similarly,
23 = 10111 = 1x2^4 + 0x2^3 + 1x2^2 + 1x2^1 + 1x2^0

Complements

In digital systems, complements are used to simplify the subtraction operation. There are two types of complements:
The r's Complement
The (r-1)'s Complement

Given:

  • N a positive number.
  • r base of the number system.
  • n number of digits.
  • m number of digits in fraction part.
The r's complement of N is defined as r^n - N for N not equal to 0, and 0 for N = 0.

The (r-1)'s complement of N is defined as r^n - r^(-m) - N.

Subtraction with r's complement:

The subtraction of two positive numbers (M - N), both in base r, is done as follows:
1. Add M to the r's complement of N.
2. Check for an end carry:
(a) If an end carry occurs, ignore it.
(b) If there is no end carry, the negative of the r's complement of the result obtained in step-1 is the required value.

Subtraction with (r-1)'s complement:

The subtraction of two positive numbers (M - N), both in base r, is done as follows:
1. Add M to the (r-1)'s complement of N.
2. Check for an end carry:
(a) If an end carry occurs, add 1 to the result obtained in step-1.
(b) If there is no end carry, the negative of the (r-1)'s complement of the result obtained in step-1 is the required value.

For a binary number system the complements are: 2's complement and 1's complement.
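For example, a quick worked sketch with 4-bit binary numbers M = 1010 (10) and N = 0011 (3): the 2's complement of N is 1101, and M + 1101 = 1 0111; the end carry is discarded, leaving 0111 = 7 = M - N. With the 1's complement, the 1's complement of N is 1100, and M + 1100 = 1 0110; the end carry is added back, giving 0110 + 1 = 0111, the same result.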

2's Complement vs 1's Complement

The only advantage of 1's complement is that it is easy to calculate: just change 0s into 1s and 1s into 0s. The 2's complement can be calculated in two ways: (i) add 1 to the 1's complement of the number, or (ii) leave all the least-significant 0s and the first 1 unchanged, and change the remaining 0s into 1s and 1s into 0s (e.g., the 2's complement of 1001100 is 0110100: the trailing 100 is copied and the remaining bits are complemented).

The advantages of 2's complement over 1's complement are:
(i) For subtraction with complements, 2's complement requires only one addition operation, whereas 1's complement requires a second addition (the end-around carry) whenever there is an end carry.
(ii) 1's complement has two representations of arithmetic zero: all 0s and all 1s.

Binary Logic

Binary logic deals with only two discrete values: 0 or 1, true or false, yes or no, etc. Binary logic is equivalent to Boolean algebra and is also called Boolean logic. In Boolean algebra there are three basic operations: AND, OR, and NOT.
AND: Given two inputs x and y, the expression x.y (or simply xy) represents "x AND y" and equals 1 if both x and y are 1, otherwise 0.
OR: Given two inputs x and y, the expression x+y represents "x OR y" and equals 1 if at least one of x and y is 1, otherwise 0.
NOT: Given x, the expression x' represents "NOT x" and equals 1 if x is 0, otherwise 0. NOT x is the complement of x.

Logic Gates

A logic gate performs a logical operation on one or more logic inputs and produces a single logic output. Because the output is also a logic-level value, the output of one logic gate can connect to the inputs of one or more other logic gates. Logic gates implement binary (Boolean) logic. AND, OR, and NOT are the three basic logic gates of digital systems. Their symbols are shown below.


AND and OR gates can have more than two inputs; the above diagram shows 2-input AND and OR gates. The truth tables of the AND, OR, and NOT logic gates are as follows.
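As a small companion sketch (in Verilog, since the later posts on this blog use it), the three basic gates can be described with gate primitives; the module and instance names are illustrative.

module basic_gates (
    input  wire x,
    input  wire y,
    output wire and_out,   // x . y
    output wire or_out,    // x + y
    output wire not_out    // x'
);
    and g1 (and_out, x, y);
    or  g2 (or_out,  x, y);
    not g3 (not_out, x);
endmodule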

Microprocessor Interview Questions - 4

1. What is the size of flag register of 8086 processor?
Answer


2. How many pins does the 8086 IC have?
Answer

3. What is the Maximum clock frequency of 8086?
Answer

4. What is meant by instruction cycle?
Answer

5. What is Von Neumann architecture?
Answer

6. What is the main difference between 8086 and 8085?
Answer

7. What does EAX mean?
Answer

8. What type of instructions are available in instruction set of 8086?
Answer

9. How is the Stack Pointer affected when PUSH and POP operations are performed?
Answer

10. What are SIM and RIM instructions?
Answer

Microprocessor Interview Questions - 3

1. How many bits wide is the 8086 processor?
Answer


2. What are the sizes of data bus and address bus in 8086?
Answer

3. What is the maximum addressable memory of 8086?
Answer

4. How are 32-bit addresses stored in 8086?
Answer

5. What are the 16-bit registers that are available in 8086?
Answer

6. What are the different types of address modes available in 8086?
Answer

7. How many flags are available in flag register? What are they?
Answer

8. Explain the functioning of IP (instruction pointer).
Answer

9. What are the various types of interrupts present in 8086?
Answer

10. How many segments are present in 8086? What are they?
Answer

Digital Design Interview Questions - 5

1. Expand the following: PLA, PAL, CPLD, FPGA.
Answer


2. Implement the functions: X = A'BC + ABC + A'B'C' and Y = ABC + AB'C using a PLA.
Answer

3. What are PLA and PAL? Give the differences between them.
Answer

4. What is LUT?
Answer

5. What is the significance of FPGAs in modern day electronics? (Applications of FPGA.)
Answer

6. What are the differences between CPLD and FPGA?
Answer

7. Compare and contrast FPGA and ASIC digital designing.
Answer

8. Give True or False.
(a) CPLD consumes less power per gate when compared to FPGA.
(b) CPLD has more complexity than FPGA
(c) FPGA design is slower than corresponding ASIC design.
(d) FPGA can be used to verify the design before making a ASIC.
(e) PALs have programmable OR plane.
(f) FPGA designs are cheaper than corresponding ASIC, irrespective of design complexity.
Answer

9. Arrange the following in the increasing order of their complexity: FPGA,PLA,CPLD,PAL.
Answer

10. Give the FPGA digital design cycle.
Answer

Programmable Logic Array

In digital design, we often use one device for multiple applications; the device's configuration is changed (reconfigured) by programming it. Such devices are known as programmable devices and are used to build reconfigurable digital circuits. The following are the popular programmable devices:

  • PLA - Programmable Logic Array
  • PAL - Programmable Array Logic
  • CPLD - Complex Programmable Logic Device (see the Complex Programmable Logic Device section for more details)
  • FPGA - Field-Programmable Gate Array (see the Field-Programmable Gate Array section for more details)

PLA: A Programmable Logic Array is a programmable device used to implement combinational logic circuits. The PLA has a set of programmable AND planes, which link to a set of programmable OR planes, which can then be conditionally complemented to produce an output. This layout allows a large number of logic functions to be synthesized in sum-of-products canonical form.

Suppose we need to implement the functions X = A'BC + ABC + A'B'C' and Y = ABC + AB'C. The following figure shows how the PLA is configured; the big dots in the diagram are connections. For the first (left-most) AND gate, A', B, and C are connected, forming the first product term of X. For the second AND gate (from the left), A, B, and C are connected, forming ABC. Similarly for A'B'C' and AB'C. Once the product terms are implemented, they are combined using the OR gates to form the functions X and Y.
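For reference, a behavioral Verilog sketch of the two functions is given below. It describes the logic the PLA is programmed to realize (product terms feeding OR gates), not the PLA hardware itself; the module and wire names are illustrative.

module pla_functions (
    input  wire A,
    input  wire B,
    input  wire C,
    output wire X,
    output wire Y
);
    // Product terms (the programmed AND plane)
    wire p0 = ~A &  B &  C;   // A'BC
    wire p1 =  A &  B &  C;   // ABC
    wire p2 = ~A & ~B & ~C;   // A'B'C'
    wire p3 =  A & ~B &  C;   // AB'C

    // Sums (the programmed OR plane)
    assign X = p0 | p1 | p2;  // X = A'BC + ABC + A'B'C'
    assign Y = p1 | p3;       // Y = ABC + AB'C
endmodule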


One application of a PLA is to implement the control over a datapath: it defines the various states of an instruction sequence and produces the next state (including conditional branching).

Note that the use of the word "Programmable" does not indicate that all PLAs are field-programmable; in fact many are mask-programmed during manufacture in the same manner as a ROM. This is particularly true of PLAs that are embedded in more complex and numerous integrated circuits such as microprocessors. PLAs that can be programmed after manufacture are called FPLA (Field-programmable logic array).

FPGA vs ASIC

Definitions

FPGA: A Field-Programmable Gate Array (FPGA) is a semiconductor device containing programmable logic components called "logic blocks", and programmable interconnects. Logic blocks can be programmed to perform the function of basic logic gates such as AND and XOR, or more complex combinational functions such as decoders or mathematical functions. (See the Field-Programmable Gate Array section for complete details.)

ASIC: An application-specific integrated circuit (ASIC) is an integrated circuit designed for a particular use, rather than intended for general-purpose use. Processors, RAM, ROM, etc are examples of ASICs.

FPGA vs ASIC

Speed
ASICs outperform FPGAs in terms of speed. As ASICs are designed for a specific application they can be optimized to the maximum; hence ASIC designs can achieve high speed and can run high-speed clocks.

Cost
FPGAs are cost-effective for small applications, but when it comes to complex and large-volume designs (like 32-bit processors) ASIC products are cheaper.

Size/Area
FPGAs contain lots of LUTs and routing channels which are connected via a bit stream (program). Because they are made for general-purpose use and for re-usability, FPGA designs are in general larger than the corresponding ASIC designs. For example, a LUT gives you both a registered and a non-registered output; if we require only the non-registered output, the extra circuitry is wasted. In this way an ASIC will be smaller in size.

Power
FPGA designs consume more power than ASIC designs. As explained above, the unwanted circuitry results in wasted power, and a FPGA does not allow as much power optimization. ASIC designs, on the other hand, can be optimized to the fullest.

Time to Market
FPGA designs take less time, as the design cycle is short compared to that of ASIC designs: there is no need for layouts, masks or other back-end processes. It is very simple: specifications -- HDL + simulations -- synthesis -- place and route (along with static timing analysis) -- dump the bitstream onto the FPGA and verify. When it comes to an ASIC, we also have to do floor planning and advanced verification. The FPGA design flow eliminates the complex and time-consuming chip-level floor planning, mask generation and re-spin stages of the project, since the design logic is placed onto an already verified, characterized FPGA device.


Type of Design
ASIC can have mixed-signal designs, or only analog designs. But it is not possible to design them using FPGA chips.

Customization
ASIC has the upper hand when it comes to customization: the device can be fully customized, since an ASIC is designed to a given specification. Just imagine implementing a 32-bit processor on a FPGA!

Prototyping
Because of the re-usability of FPGAs, they are used as ASIC prototypes. The ASIC design's HDL code is first dumped onto a FPGA and tested for correct results. Once the design is error-free, it is taken through the remaining ASIC steps. Clearly, a FPGA may be needed while designing an ASIC.

Non Recurring Engineering/Expenses
NRE refers to the one-time cost of researching, designing, and testing a new product, which is generally associated with ASICs. No such cost is associated with a FPGA; hence FPGA designs are cost-effective.

Simpler Design Cycle
Because software handles much of the routing, placement, and timing, FPGA designs have a shorter design cycle than ASICs.

More Predictable Project Cycle
Due to the elimination of potential re-spins, wafer capacity constraints, etc., FPGA designs have a more predictable project cycle.

Tools
The tools used for FPGA designs are relatively cheaper than those used for ASIC designs.

Re-Usability
A single FPGA can be used for various applications by simply reprogramming it (dumping new HDL code). ASICs, by definition, are application specific and cannot be reused.

Field-Programmable Gate Array

A Field-Programmable Gate Array (FPGA) is a semiconductor device containing programmable logic components called "logic blocks", and programmable interconnects. Logic blocks can be programmed to perform the function of basic logic gates such as AND, and XOR, or more complex combinational functions such as decoders or mathematical functions. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.

Applications

  • ASIC prototyping: due to the high cost of ASIC chips, the logic of the application is first verified by dumping the HDL code onto a FPGA. This allows faster and cheaper testing. Once the logic is verified, the design is made into an ASIC.
  • Very useful in applications that can make use of the massive parallelism offered by their architecture. Example: code breaking, in particular brute-force attack, of cryptographic algorithms.
  • FPGAs are used for computational kernels such as FFT or convolution instead of a microprocessor.
  • Applications include digital signal processing, software-defined radio, aerospace and defense systems, medical imaging, computer vision, speech recognition, cryptography, bio-informatics, computer hardware emulation and a growing range of other areas.
Architecture

A FPGA consists of a large number of "configurable logic blocks" (CLBs) and routing channels. Multiple I/O pads may fit into the height of one row or the width of one column of the array. In general all the routing channels have the same width. The block diagram of the FPGA architecture is shown below.


CLB: The CLB consists of an n-input look-up table (LUT), a flip-flop and a 2x1 mux. The value of n is manufacturer specific; increasing n can increase the performance of a FPGA. Typically n is 4. An n-input lookup table can be implemented with a multiplexer whose select lines are the inputs of the LUT and whose data inputs are constants. An n-input LUT can encode any n-input Boolean function by modeling such functions as truth tables. This is an efficient way of encoding Boolean logic functions, and LUTs with 4-6 inputs are in fact the key component of modern FPGAs. The block diagram of a CLB is shown below.


Each CLB has n inputs and only one output, which can be either the registered or the unregistered LUT output; the output is selected using the 2x1 mux. The LUT output is registered using the flip-flop (generally a D flip-flop), which is clocked to register the output. In general, high-fanout signals like clock signals are routed via special-purpose dedicated routing networks and are managed separately from other signals.
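A rough behavioral model of such a CLB is sketched below: a 4-input LUT (a 16:1 mux whose data inputs are the programmed constants), a D flip-flop, and the 2x1 output mux. The port names and the way the LUT contents are supplied are assumptions for illustration, not any vendor's actual primitive.

module clb_model (
    input  wire        clk,
    input  wire [3:0]  in,        // the n = 4 LUT inputs
    input  wire [15:0] lut_init,  // LUT contents (the programmed truth table)
    input  wire        sel_reg,   // 2x1 mux select: 1 = registered output
    output wire        out
);
    // The LUT is a 16:1 multiplexer whose select lines are the inputs
    // and whose data inputs are the programmed constants
    wire lut_out = lut_init[in];

    // Optional register on the LUT output (D flip-flop)
    reg lut_q;
    always @(posedge clk)
        lut_q <= lut_out;

    // 2x1 mux selects the registered or the unregistered LUT output
    assign out = sel_reg ? lut_q : lut_out;
endmodule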

Routing channels are programmed to connect the various CLBs. The connections are made according to the design, so that the CLBs together realize the logic of the design.

FPGA Programming

The design is first coded in an HDL (Verilog or VHDL) and validated by simulation and synthesis. During synthesis, typically done using tools like Xilinx ISE, FPGA Advantage, etc., a technology-mapped netlist is generated. The netlist is then fitted to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA vendor's proprietary place-and-route software. The user validates the map, place and route results via timing analysis, simulation, and other verification methodologies. Once the design and validation process is complete, the binary file generated is used to (re)configure the FPGA, and the FPGA is then tested. If there are any issues or modifications, the original HDL code is modified, the entire process is repeated, and the FPGA is reconfigured.

Random Access Memory

Random Access Memory (RAM) is a type of computer data storage, mainly used as the main memory of a computer. RAM allows data to be accessed in any order, i.e. randomly. The word random refers to the fact that any piece of data can be returned in constant time, regardless of its physical location and whether or not it is related to the previous piece of data: any memory cell can be accessed directly if the row and column that intersect at that cell are known.
    Most RAM chips are volatile types of memory, where the information is lost after the power is switched off. There are also some non-volatile types, such as ROM and NOR flash.

SRAM: Static Random Access Memory
SRAM is static: it does not need to be periodically refreshed, as it uses bistable latching circuitry to store each bit. SRAM is volatile memory. Each bit in an SRAM is stored on four transistors that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors control access to the storage cell during read and write operations, so a typical SRAM uses six MOSFETs to store each memory bit.
    As SRAM does not need to be refreshed it is faster than other types, but as each cell uses at least 6 transistors it is also more expensive. So in general SRAM is used for the faster-access memory units of a CPU.

DRAM: Dynamic Random Access Memory
In a DRAM, a transistor and a capacitor are paired to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information. The transistor acts as a switch that lets the control circuitry on the memory chip read the capacitor or change its state. As capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh process, it is a dynamic memory.
    The advantage of DRAM is its structural simplicity: it requires only one transistor and one capacitor per bit, so high density can be achieved. Hence DRAM is cheaper but slower when compared to SRAM.

Other types of RAM

FPM DRAM: Fast page mode dynamic random access memory was the original form of DRAM. It waits through the entire process of locating a bit of data by column and row and then reading the bit before it starts on the next bit.

EDO DRAM: Extended data-out dynamic random access memory does not wait for all of the processing of the first bit before continuing to the next one. As soon as the address of the first bit is located, EDO DRAM begins looking for the next bit. It is about five percent faster than FPM.

SDRAM: Synchronous dynamic random access memory takes advantage of the burst mode concept to greatly improve performance. It does this by staying on the row containing the requested bit and moving rapidly through the columns, reading each bit as it goes. The idea is that most of the time the data needed by the CPU will be in sequence. SDRAM is about five percent faster than EDO RAM and is the most common form in desktops today.

DDR SDRAM: Double data rate synchronous dynamic RAM is just like SDRAM except that is has higher bandwidth, meaning greater speed.

DDR2 SDRAM: Double data rate two synchronous dynamic RAM. Its primary benefit is the ability to operate the external data bus twice as fast as DDR SDRAM. This is achieved by improved bus signaling, and by operating the memory cells at half the clock rate (one quarter of the data transfer rate), rather than at the clock rate as in the original DDR SDRAM.

Direct Memory Access

Direct memory access (DMA) is a feature of modern computers that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel.

Principle of DMA

DMA is an essential feature of all modern computers, as it allows devices to transfer data without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy each piece of data from the source to the destination. This is typically slower than copying normal blocks of memory since access to I/O devices over a peripheral bus is generally slower than normal system RAM. During this time the CPU would be unavailable for any other tasks involving CPU bus access, although it could continue doing any work which did not require bus access.

A DMA transfer essentially copies a block of memory from one device to another. While the CPU initiates the transfer, it does not execute it. For so-called "third party" DMA, as is normally used with the ISA bus, the transfer is performed by a DMA controller which is typically part of the motherboard chipset. More advanced bus designs such as PCI typically use bus mastering DMA, where the device takes control of the bus and performs the transfer itself.

A typical usage of DMA is copying a block of memory from system RAM to or from a buffer on the device. Such an operation does not stall the processor, which as a result can be scheduled to perform other tasks. DMA is essential to high performance embedded systems. It is also essential in providing so-called zero-copy implementations of peripheral device drivers as well as functionalities such as network packet routing, audio playback and streaming video.

DMA Controller

The processing unit which controls the DMA process is known as the DMA controller. Typically, the job of the DMA controller is to set up a connection between the memory unit and the IO device, with permission from the microprocessor, so that data can be transferred with much less processor overhead. The following figure shows a simple example of the hardware interface of a DMA controller in a microprocessor-based system.


Functioning (follow the timing diagram for better understanding):
Whenever there is an IO request (IOREQ) for memory access from an IO device, the DMA controller sends a halt signal (HALT, generally active low) to the microprocessor. The microprocessor then acknowledges the DMA controller with a bus-availability signal (BA). As soon as BA is asserted, the DMA controller sends an IO acknowledgment (IOACK) to the IO device and a chip enable (CE, active low) to the memory unit. The read/write control (R/W) signal is given by the IO device to the memory unit, and the data transfer begins. When the data transfer is finished, the IO device sends an end-of-transfer (EOT, active low) signal, and the DMA controller stops halting the microprocessor. ABUS and DBUS are the address bus and data bus, respectively; they are included just to show that the microprocessor, IO devices, and memory units are connected to buses through which the data is transferred.
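A highly simplified Verilog sketch of this sequencing is given below. Signal polarities follow the text (HALT, CE, and EOT are active low); the state names, the single clock, and the initial block are illustrative assumptions, and the actual address/data-bus handling is omitted.

module dma_ctrl_sketch (
    input  wire clk,
    input  wire ioreq,    // IO request from the IO device (IOREQ)
    input  wire ba,       // bus-available acknowledgment from the processor (BA)
    input  wire eot_n,    // end of transfer from the IO device (EOT, active low)
    output reg  halt_n,   // halt request to the processor (HALT, active low)
    output reg  ioack,    // IO acknowledgment to the IO device (IOACK)
    output reg  ce_n      // chip enable to the memory unit (CE, active low)
);
    localparam IDLE = 2'd0, WAIT_BA = 2'd1, XFER = 2'd2;
    reg [1:0] state;

    initial begin                          // for simulation; real hardware needs a reset
        state = IDLE; halt_n = 1'b1; ioack = 1'b0; ce_n = 1'b1;
    end

    always @(posedge clk) begin
        case (state)
            IDLE:    if (ioreq) begin
                         halt_n <= 1'b0;   // ask the processor to halt
                         state  <= WAIT_BA;
                     end
            WAIT_BA: if (ba) begin         // processor has released the buses
                         ioack <= 1'b1;    // let the IO device start
                         ce_n  <= 1'b0;    // enable the memory unit
                         state <= XFER;
                     end
            XFER:    if (!eot_n) begin     // transfer finished
                         ioack  <= 1'b0;
                         ce_n   <= 1'b1;
                         halt_n <= 1'b1;   // stop halting the processor
                         state  <= IDLE;
                     end
            default: state <= IDLE;
        endcase
    end
endmodule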


Setup and Hold Time

Every flip-flop has restrictive time regions around the active clock edge in which the input should not change. We call them restrictive because if the input changes in these regions, the output may not be the expected one (*see below): it may be derived from either the old input or the new input, or may even lie in between the two. Here we define two very important terms in digital clocking: setup time and hold time.

  • The setup time is the interval before the clock edge during which the data must be held stable.
  • The hold time is the interval after the clock edge during which the data must be held stable. Hold time can be negative, which means the data can change slightly before the clock edge and still be captured properly. Most present-day flip-flops have zero or negative hold time. (A Verilog timing-check sketch of these windows follows this list.)
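In Verilog gate-level models these two windows are expressed with timing checks inside a specify block. A minimal sketch for a D flip-flop is shown below; the 2-unit setup and 1-unit hold limits are illustrative assumptions.

module dff_with_checks (
    input  wire clk,
    input  wire d,
    output reg  q
);
    always @(posedge clk)
        q <= d;

    specify
        // Report a violation if d changes inside the window around posedge clk
        $setup(d, posedge clk, 2);   // setup window: 2 time units before the edge
        $hold(posedge clk, d, 1);    // hold window: 1 time unit after the edge
    endspecify
endmodule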


In the above figure, the shaded region is the restricted region, divided into two parts by the dashed line. The left-hand part of the shaded region is the setup-time period and the right-hand part is the hold-time period. If the data changes in this region, as shown in the figure, the output may follow the input, may not follow the input, or may go to a metastable state (where the output cannot be recognized as either logic low or logic high; this phenomenon is known as metastability).


The above figure shows the restricted region (shaded region) for a flip-flop whose hold time is negative. The following diagram illustrates the restricted region of a D flip-flop, where D is the input, Q is the output, and clock is the clock signal. If D changes in the restricted region, the flip-flop may not behave as expected, i.e. Q is unpredictable.


To avoid setup time violations:
  • The combinational logic between the flip-flops should be optimized to get minimum delay.
  • Redesign the flip-flops to get lesser setup time.
  • Tweak the launch flip-flop to have a better slew at the clock pin; this makes the launch flip-flop faster, thereby helping to fix setup violations.
  • Play with clock skew (useful skews).
To avoid hold time violations:
  • Add delays (using buffers).
  • Add lockup latches (in cases where the hold-time requirement is very large, basically to avoid data slip).
* "may be the expected one" means the output is not certain: it may or may not be the value you expect ("may" implies uncertainty). Thanks to the readers for their comments.

Parallel vs Serial Data Transmission

Parallel and serial data transmission are the most widely used data transfer techniques. Parallel transfer has long been the preferred way to transfer data, but with serial data transmission we can achieve higher speeds along with some other advantages.

In parallel transmission n bits are transferred simultaneously; hence each bit has to be processed separately and the bits lined up in order at the receiver, i.e. converted from parallel to serial form. This is known as the overhead of parallel transmission.

Signal skew is another problem with parallel data transmission. In parallel communication, n bits leave the transmitter at the same time but may not reach the receiver at the same time; some arrive later than others. To overcome this problem, the receiving end has to synchronize with the transmitter and wait until all the bits are received. The greater the skew the greater the delay, and increased delay reduces speed.

Another problem associated with parallel transmission is crosstalk. When n wires lie parallel to each other, the signal on a particular wire may get attenuated or disturbed due to induction, cross-coupling, etc. As a result errors grow significantly, and extra processing is necessary at the receiver.

Serial communication is full duplex, whereas parallel communication is half duplex. This means that in serial communication we can transmit and receive simultaneously, whereas in parallel communication we can either transmit or receive at a given time. Hence serial data transfer is considered superior to parallel data transfer.

Practically, in computers we can achieve 150 MB/s data transfer using serial transmission, whereas with parallel transmission we can go up to only 133 MB/s.

The advantage we get using parallel data transfer is reliability. Serial data transfer is less reliable than parallel data transfer.

Digital Design Interview Questions - 4

1. Design 2 input AND, OR, and EXOR gates using 2 input NAND gate.
Answer


2. Design a circuit which doubles the frequency of a given input clock signal.
Answer

3. Implement a D-latch using 2x1 multiplexer(s).
Answer

4. Give the excitation table of a JK flip-flop.
Answer

5. Give the Binary, Hexadecimal, BCD, and Excess-3 code for decimal 14.
Answer

6. What is race condition?
Answer

7. Give 1's and 2's complement of 19.
Answer

8. Design a 3:6 decoder.
Answer

9. If A*B=C and C*A=B then, what is the Boolean operator * ?
Answer

10. Design a 3 bit Gray Counter.
Answer

Verilog Interview Questions - 3

1. How are blocking and non-blocking statements executed?
Answer


2. How do you model a synchronous and asynchronous reset in Verilog?
Answer

3. What happens if there is a width mismatch between connected wires?
Answer

4. What are different options that can be used with $display statement in Verilog?
Answer

5. Give the precedence order of the operators in Verilog.
Answer

6. Should we include all the inputs of a combinational circuit in the sensitivity list? Give reason.
Answer

7. Give 10 commonly used Verilog keywords.
Answer

8. Is it possible to optimize a Verilog code such that we can achieve low power design?
Answer

9. How does the following code work?
wire [3:0] a;
always @(*)
begin
  case (1'b1)
    a[0]: $display("Its a[0]");
    a[1]: $display("Its a[1]");
    a[2]: $display("Its a[2]");
    a[3]: $display("Its a[3]");
    default: $display("Its default");
  endcase
end
Answer

10. Which is updated first: signal or variable?
Answer

VLSI Interview Questions - 5

This section contains interview questions related to LOW POWER VLSI DESIGN.

1. What are the important aspects of VLSI optimization?
Answer


2. What are the sources of power dissipation?
Answer

3. What is the need for power reduction?
Answer

4. Give some low power design techniques.
Answer

5. Give a disadvantage of voltage scaling technique for power reduction.
Answer

6. Give an expression for switching power dissipation.
Answer

7. Will glitches in a logic circuit cause power wastage?
Answer

8. What is the major source of power wastage in SRAM?
Answer

9. What is the major problem associated with caches w.r.t low power design? Give techniques to overcome it.
Answer

10. Does software play any role in low power design?
Answer

Digital Design Interview Questions - 1

1. How do you convert a XOR gate into a buffer and an inverter (use only one XOR gate for each)?
Answer


2. Implement a 2-input AND gate using a 2x1 mux.
Answer

3. What is a multiplexer?
Answer

4. What is a ring counter?
Answer

5. Compare and Contrast Synchronous and Asynchronous reset.
Answer

6. What is a Johnson counter?
Answer

7. An assembly line has 3 fail-safe sensors and one emergency shutdown switch. The line should keep moving unless any of the following conditions arise:
(1) The emergency switch is pressed.
(2) Sensor1 and sensor2 are activated at the same time.
(3) Sensor2 and sensor3 are activated at the same time.
(4) All the sensors are activated at the same time.
Suppose a combinational circuit for the above case is to be implemented only with NAND gates. What is the minimum number of 2-input NAND gates required?
Answer

8. In a 4-bit Johnson counter, how many unused states are present?
Answer

9. Design a 3 input NAND gate using minimum number of 2 input NAND gates.
Answer

10. How can you convert a JK flip-flop to a D flip-flop?
Answer
