The Static Linker

What steps convert a program from source code to an executable file?

  • Pre-compile
    cpp hello.c > hello.i
    gcc -E hello.c -o hello.i
    
    • delete all #define directives and expand all macro definitions
    • handle all conditional preprocessing directives: #if, #ifdef, #elif, #else, #endif
    • handle #include: insert the included file into the code; this is recursive
    • delete all comments from the code
    • add line-number and file-name markers, so the compiler can emit line numbers for debug information and for error or warning messages
    • keep all #pragma directives, since the compiler still needs them
  • Compile
    The compiler turns the preprocessed file into assembly code through the scanner, grammar parser, semantic parser, and optimizer.
    gcc -S hello.i -o hello.s
    

    or directly use:

    gcc -S hello.c -o hello.s
    
  • Assemble
    The assembler translates assembly language into machine code according to a table that records the mapping between assembly instructions and machine code.
    You can use the following command to produce an object file the machine can recognize:
    gcc -c hello.s -o hello.o
    
  • Link
    The linker combines all object files into one executable file, for example:
    gcc hello.o -o hello
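
As a concrete illustration (a minimal sketch; the file name and macro are only for illustration), the following hello.c passes through all four stages:

    /* hello.c */
    #include <stdio.h>             /* pre-compile: the header is inserted here, recursively        */
    #define GREETING "Hello"       /* pre-compile: every use of GREETING is expanded, then removed */

    int main(void)
    {
        /* compile:  this statement becomes assembly code in hello.s          */
        /* assemble: hello.s becomes machine code in hello.o                  */
        /* link:     hello.o and the C library are joined into the executable */
        printf("%s, World\n", GREETING);
        return 0;
    }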

How does the compiler work?

Compiler Work Process

  • The front end of the compiler
    • Scanner and Parser
      The scanner splits the source code into tokens using a finite state machine.
      These tokens fall into the following classes:
      • KeyWord
      • Identifier
      • Literal (numbers, string)
      • Special Symbol

      These tokens are stored in a symbol table.
      The program lex implements this function: it splits the input string into tokens according to rules supplied by the user, so the scanner does not have to be written by hand; programs only need to change the rules to fit their requirements.
      Note that for the C programming language, macro expansion and file inclusion are handled earlier, in the pre-compile stage.
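
      A minimal hand-written sketch of such a scanner (not generated by lex; the token classes follow the list above, and the keyword set here is only illustrative):

      #include <ctype.h>
      #include <stdio.h>
      #include <string.h>

      /* crude token classification: keyword/identifier, number literal, or symbol */
      static const char *classify(const char *tok)
      {
          if (isdigit((unsigned char)tok[0]))
              return "Literal";
          if (isalpha((unsigned char)tok[0]) || tok[0] == '_')
              return (strcmp(tok, "int") == 0 || strcmp(tok, "return") == 0)
                     ? "KeyWord" : "Identifier";
          return "Special Symbol";
      }

      int main(void)
      {
          const char *src = "int x = 42 ;";
          char buf[32];
          int n = 0;
          /* the "state machine" here is simply: accumulate non-space characters,
             emit a token whenever a delimiter (space or end of input) is reached */
          for (const char *p = src; ; p++) {
              if (*p != '\0' && !isspace((unsigned char)*p)) {
                  buf[n++] = *p;
              } else {
                  if (n > 0) { buf[n] = '\0'; printf("%-10s %s\n", buf, classify(buf)); n = 0; }
                  if (*p == '\0') break;
              }
          }
          return 0;
      }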

    • Grammar Parser
      The syntax tree is built by analyzing the tokens produced by the scanner; the analysis uses a context-free grammar. The syntax tree is a tree whose nodes are expressions. Symbols and numbers are the smallest expressions; they are not built from other expressions, so they become the leaves of the tree. While the grammar is analyzed, the precedence and meaning of the arithmetic operators are determined.
      If an expression is illegal, the compiler reports the failure during grammar parsing.
      The tool yacc generates grammar parsers; it is also known as a Compiler Compiler.
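
      A syntax-tree node can be sketched as a small C structure (this layout is only an assumption for illustration, not any real compiler's definition):

      /* one node of an expression tree: either a leaf (number, symbol)
         or an operator with two child expressions                      */
      struct expr {
          enum { NUMBER, SYMBOL, BINARY_OP } kind;
          double       number;        /* valid when kind == NUMBER     */
          const char  *symbol;        /* valid when kind == SYMBOL     */
          char         op;            /* '+', '*', ... for BINARY_OP   */
          struct expr *left, *right;  /* children for BINARY_OP        */
      };
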
    • Semantic Parser
      The semantic parser explains the meaning of the program: it checks the semantics of each statement.
      Semantic contains two aspects:

      • Static
        Semantics that can be checked at compile time, such as whether identifiers and types match.
      • Dynamic
        Semantics that can only be checked at run time.

      After semantic analysis, every node in the syntax tree is marked with a type. If a type needs an automatic conversion, the semantic analyzer inserts a conversion node into the syntax tree.
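
      For example (a minimal sketch), in the following C code the analyzer marks i as int and 1.5 as double, and inserts an int-to-double conversion node above i so that the addition is done in double:

      int i = 3;
      double d = i + 1.5;   /* semantically: d = (double)i + 1.5 */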

    • Build the Intermediate Language
      The source code optimizer converts the syntax tree into intermediate code.
      Intermediate code is close to machine code, but it does not depend on any platform; for example, it contains no data sizes, variable addresses, or register names.
      Because the intermediate code is platform-independent, it lets the compiler be split into two parts:
      The front end – produces one common intermediate code
      The back end – has a separate implementation for each target platform
  • The back end of the compiler
    • Generate the machine code
      The Code Generator translates the intermediate code into machine code. This step depends on the hardware of the target machine: different machines have different word sizes, registers, and data sizes.
    • Target Code Optimizer
      Finally, the Target Code Optimizer optimizes the machine code produced by the Code Generator.
      Then the object code (the machine code) is linked into an executable file.

How does the linker work?

  • What kind of work does the linker complete?
    • relocating the addresses in the object code when the code changes
    • replacing each symbol with the real address it stands for
      In assembly language a symbol can stand for the address of a variable or a function.
    • joining all modules to generate an executable file
      An object file containing machine code is called a module. Joining all modules is essentially a communication problem:
      modules refer to each other through symbols, and the joining process is called linking.
  • Static Linker
    The job of the static linker is to resolve all cross-references so that the modules connect correctly.
    The process of linking is:

    • Address and Storage Allocation
    • Symbol Resolution
    • Relocation

    The linker rewrites every address that serves as a placeholder for a symbol into the real address. This process is called Relocation, and every such placeholder address is called a Relocation Entry.
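
    A minimal two-module sketch (the file names are only illustrative): a.o cannot know the address of shared, so the assembler leaves a placeholder that the linker later patches.

    /* b.c -- defines the symbol `shared`                               */
    int shared = 42;

    /* a.c -- references a symbol defined in another module             */
    extern int shared;

    int main(void)
    {
        /* in a.o the address of `shared` is only a placeholder;        */
        /* the linker fixes it up through a relocation entry            */
        return shared;
    }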

The Basic Structure of the Computer

  • Computer System Design
    • Bus
      • Transfers data between the components of the machine
      • Because different components transfer data at different speeds, separate buses are needed (north bridge and south bridge)
    • CPU
      • From SMP (Symmetrical Multi-Processing) to Multi-core Processor.
  • What problems does the Operating System want to solve?
    • CPU
      To the Operating System, the CPU is a resource that it can allocate and control. Anything the computer needs to do requires this resource; otherwise the task cannot run.
      • The Operating System handles CPU allocation and controls how much time every process gets.
      • The Operating System can take the CPU away from one process and hand it to the process that needs it. This is called Preemption.
    • Device
      • The Operating System hides the details of the hardware. A caller operates the various hardware only through simple objects.
      • The Operating System classifies the various hardware and unifies the operations on it.
    • Memory
      • Memory is another important resource. A process can run only when it has both the CPU and memory.
      • Memory usage has to be efficient, safe, and hidden behind a clean abstraction.

        Virtual Memory implements these features. However, storage really exists only as physical memory, so virtual addresses must be translated to physical addresses in some special way.

      • segmentation:

        Segmentation solves safety and detail hiding, but it is not efficient: one segment covers a whole process, so finding a region of suitable size is the hard part of segmentation.

      • pages:

        With paging, memory is divided into fixed-size pages. The page size is decided by the hardware, or the hardware offers several sizes and the operating system chooses one.

        In general, 4 KB is the normal size for a page.

        A page is called a Virtual Page in virtual memory, a Physical Page in physical memory, and a Disk Page on disk.

        To share a physical page, simply map several virtual pages to it; the data in that physical page is then shared by all of those virtual pages.

        Safety

        The protection level of a page can be set only by the operating system.

        The MMU, a hardware component, translates the virtual addresses used by a program into the physical addresses managed by the operating system.
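
        As a small worked example (assuming the usual 4 KB pages mentioned above), a virtual address splits into a virtual page number and an offset inside the page:

        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SIZE 4096u                    /* 4 KB, so the offset needs 12 bits */

        int main(void)
        {
            uint32_t vaddr  = 0x12345;             /* an arbitrary virtual address      */
            uint32_t page   = vaddr / PAGE_SIZE;   /* virtual page number: 0x12         */
            uint32_t offset = vaddr % PAGE_SIZE;   /* offset inside the page: 0x345     */
            printf("page 0x%x, offset 0x%x\n", page, offset);
            return 0;
        }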

    • Thread

      • The Basics of Threads:
        • A lightweight process; the minimum unit of program execution.
        • A thread has its own TID (thread id), program counter, registers, and stack.
        • Threads share the resources of the process: the code segment, the data segment, the process heap, open files, and signals.

        Situations where threads are used:

        • Some work has to wait while other work keeps computing
        • The program naturally needs to do several things at once
        • Multiple threads can exploit the full power of a multi-core processor
        • The threads need to share the data of the process

        The private resources of the thread:

        • stack
        • Thread Local Storage
        • registers

          Thread Private            Thread Shared
          Local variables           Global variables
          Function arguments        Data in the heap
                                    Static variables inside functions
                                    Code
                                    Open files
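
        A minimal pthreads sketch (POSIX threads assumed, compile with gcc -pthread; the names are only illustrative) showing a shared global versus a thread-private local:

        #include <pthread.h>
        #include <stdio.h>

        int shared_counter = 0;                   /* global: shared by every thread          */

        static void *worker(void *arg)
        {
            (void)arg;
            int local = 0;                        /* local variable: private to this thread  */
            for (int i = 0; i < 1000; i++) {
                local++;
                shared_counter++;                 /* unprotected on purpose: these increments
                                                     race; see the lock discussion below     */
            }
            printf("local = %d\n", local);
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            /* each thread prints local = 1000, but shared_counter can end up
               below 2000 because the unsynchronized increments interleave     */
            printf("shared_counter = %d\n", shared_counter);
            return 0;
        }
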
      • Thread Schedule:
        • Running: the thread is currently running
        • Ready: the thread could run, but the CPU is currently occupied by another thread
        • Waiting: the thread is waiting for an event to happen (I/O or an asynchronous result) and cannot run

          Priority Scheduling:

          Decides the order in which threads run; every thread has its own priority.

          Round Robin:

          Every thread owns a time slice of the CPU for a short while. When the time slice is used up, the thread must give up the CPU.

          • Thread Automatic Schedule:
            IO Bound Thread
            The time it takes to complete a computation is determined principally by the period spent waiting for input/output operations to be completed.
            CPU Bound Thread
            The time for it to complete a task is determined principally by the speed of the central processor: processor utilization is high, perhaps at 100% usage for many seconds or minutes. Interrupts generated by peripherals may be processed slowly, or indefinitely delayed.

            An IO Bound Thread is more likely to be given a high priority than a CPU Bound Thread.

            Starvation
            If the CPU always runs the high-priority threads, a low-priority thread never gets a chance to run.
            To avoid starvation, a low-priority thread is allowed to run once it has been waiting for a long time.
            Preemption
            When a thread uses up its time slice, it is forced to stop running and enters the ready state.

          • Linux Threads:
            Task
            In Linux, both processes and threads are represented as tasks, and tasks can share memory with each other.
            Process : fork (usually followed by exec to run a new program)
            Thread : clone (the new task shares the address space of its creator)
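
            A minimal sketch of the fork + exec pattern (error handling omitted; /bin/ls is just an example program):

            #include <sys/wait.h>
            #include <unistd.h>

            int main(void)
            {
                pid_t pid = fork();               /* create a new task (a process)       */
                if (pid == 0) {
                    /* child: replace its image with a new program                       */
                    execl("/bin/ls", "ls", (char *)NULL);
                }
                /* parent: wait for the child to finish                                  */
                waitpid(pid, NULL, 0);
                return 0;
            }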

      • Thread Safety:

        • Competition and Atomic Operations

          If an operation is not atomic, mistakes easily happen when threads compete for a shared resource (such as memory). In that situation we must use other means to make the operation atomic.

        • Lock and Synchronization

          A lock is one way to make sure an operation is performed atomically.
          Before accessing the resource, a thread acquires the lock, and it releases the lock after the access.

          Binary Semaphore

          The semaphores which are restricted to the values 0 and 1 ( or locked/unlocked, unavailable/available) are called binary semaphores

          A general (counting) semaphore allows the resource to be acquired up to N times.

          Mutex

          A mutex is essentially the same thing as a binary semaphore and sometimes uses the same basic implementation. The differences between them are in how they are used. While a binary semaphore may be used as a mutex, a mutex is a more specific use-case, which allows extra guarantees:

          1. Mutexes have a concept of an owner. Only the process that locked the mutex is supposed to unlock it. If the owner is stored by the mutex this can be verified at runtime.
          2. Mutexes may provide priority inversion safety. If the mutex knows its current owner, it is possible to promote the priority of the owner whenever a higher priority task starts waiting on the mutex.
          3. Mutexes may also provide deletion safety, where the process holding the mutex cannot be accidentally deleted.

          Critical Section
          In concurrent programming, a critical section is a part of a multi-process program that may not be concurrently executed by more than one of the program’s processes/threads; in other words, it is a piece of program that requires mutual exclusion of access. Typically, the critical section accesses a shared resource (data structure or device)

          A critical section may consist of multiple discontiguous parts of the program’s code. For example, one part of a program might read from a file that another part wishes to modify.

          How critical sections are implemented varies among operating systems.

          • The simplest method is to prevent any change of processor control inside the section. On uni-processor systems, this can be done by disabling interrupts on entry into the critical section, avoiding system calls that can cause a context switch while inside the section, and restoring interrupts to their previous state on exit. Any thread of execution entering any critical section anywhere in the system will, with this implementation, prevent any other thread, including an interrupt, from being granted processing time on the CPU.
          • This brute-force approach can be improved upon by using semaphores. To enter a critical section, a thread must obtain a semaphore, which it releases on leaving the section. Other threads are prevented from entering the critical section at the same time as the original thread, but are free to gain control of the CPU and execute other code, including other critical sections that are protected by different semaphores.

          Read-Write Lock
          For data that threads read often but write rarely, an ordinary lock is not efficient.
          A read-write lock can be acquired in one of two modes on the same lock, Shared or Exclusive:

          Lock Status   Acquire Shared   Acquire Exclusive
          Free          Success          Success
          Shared        Success          Wait
          Exclusive     Wait             Wait

          The table shows what happens when a thread tries to acquire the lock in either mode while the lock is in the given status.
          Condition Variable
          Its usage is like a fence:

          1. Threads can wait on a condition variable.
          2. The waiting threads can be woken up by another thread that signals the condition variable.

          Many threads can wait for an event, and all of them resume running when the event happens (see the sketch below).
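
          A minimal pthreads sketch of this pattern (POSIX threads assumed; the names are only illustrative): several threads wait on one condition variable, and a broadcast wakes them all.

          #include <pthread.h>
          #include <stdio.h>

          static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
          static pthread_cond_t  event = PTHREAD_COND_INITIALIZER;
          static int event_happened = 0;        /* the condition the threads wait for */

          static void *waiter(void *arg)
          {
              pthread_mutex_lock(&lock);        /* the mutex protects event_happened  */
              while (!event_happened)           /* re-check: wakeups may be spurious  */
                  pthread_cond_wait(&event, &lock);
              pthread_mutex_unlock(&lock);
              printf("thread %ld resumed\n", (long)(size_t)arg);
              return NULL;
          }

          int main(void)
          {
              pthread_t t[3];
              for (long i = 0; i < 3; i++)
                  pthread_create(&t[i], NULL, waiter, (void *)(size_t)i);

              pthread_mutex_lock(&lock);
              event_happened = 1;               /* set the condition ...              */
              pthread_cond_broadcast(&event);   /* ... and wake every waiting thread  */
              pthread_mutex_unlock(&lock);

              for (int i = 0; i < 3; i++)
                  pthread_join(t[i], NULL);
              return 0;
          }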

        • Reentrant and Thread Safe
          The routine is called reentrant if it can be interrupted in the middle of its execution and then safely called again (“re-entered”) before its previous invocations complete execution.
          The interruption could be caused by an internal action such as a jump or call, or by an external action such as a hardware interrupt or signal. Once the reentered invocation completes, the previous invocations will resume correct execution.
          The conditions for a routine to be reentrant:

          • It does not use any (local) static variable or global non-static variable
          • It does not return any (local) static variable or global non-static variable
          • It depends only on the arguments provided by the caller
          • It does not rely on a lock on any single resource (such as a mutex)
          • It does not invoke any routine that is not reentrant

          Reentrancy is a strong guarantee of thread safety: a reentrant routine can be used safely in a multi-threaded environment.
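
          A small illustration (the names are only for this sketch): the first version keeps its result in a static buffer and is neither reentrant nor thread safe; the second depends only on its arguments.

          #include <stdio.h>

          /* NOT reentrant: uses a static buffer shared by all callers              */
          const char *format_id_bad(int id)
          {
              static char buf[32];
              snprintf(buf, sizeof buf, "id-%d", id);
              return buf;                   /* a second call overwrites the result   */
          }

          /* Reentrant: depends only on the arguments supplied by the caller        */
          char *format_id_good(int id, char *buf, size_t len)
          {
              snprintf(buf, len, "id-%d", id);
              return buf;
          }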

        • Excessive Optimization
          volatile

          1. It prevents the compiler from caching the variable in a register for speed and never writing it back to memory.
          2. It prevents the compiler from reordering operations on volatile variables.

          barrier
          volatile can prevent the wrong ordering caused by the compiler, but it cannot prevent the wrong ordering caused by the CPU, which may dynamically reorder instructions while executing the code.
          For example, in the following code the steps that create the object may be reordered.

          volatile T* pInst = 0;

          // double-checked locking: lock() / unlock() stand for some
          // platform-specific mutex primitives
          T* GetInstance()
          {
              if (pInst == NULL)       // first check avoids locking on the fast path
              {
                  lock();
                  if (pInst == NULL)   // second check: another thread may have won
                      pInst = new T;   // the CPU may reorder the steps inside "new T"
                  unlock();
              }
              return (T*)pInst;        // cast away volatile for the caller
          }
          

          The two if statements keep the locking overhead small: after the object has been created, callers take the fast path and never acquire the lock at all.

          For now, barrier is not cross-platform; every CPU has its own implementation.
          It acts like a fence that prevents the CPU from reordering memory operations across it.

          volatile T* pInst = 0;

          // barrier() stands for a CPU-specific memory barrier instruction
          T* GetInstance()
          {
              if (pInst == NULL)
              {
                  lock();
                  if (pInst == NULL)
                  {
                      T* temp = new T;   // construct the object completely first
                      barrier();         // make sure construction is visible ...
                      pInst = temp;      // ... before the pointer is published
                  }
                  unlock();
              }
              return (T*)pInst;
          }
          
      • The Models of Multi-threading:
        In the operating system, kernel threads are different from user-space threads.
        Because of that, there are three possible mappings between kernel threads and user-space threads.
        • One Vs. One
          One thread in the kernel is associated with one thread in the user space.
          Disadvantage:
          • The number of kernel threads is limited, so the number of user-space threads is limited as well
          • Context switching between kernel threads is relatively expensive
        • One Vs. Multiple
          One thread in the kernel is associated with multiple threads in the user space.
          This model reduces the cost of context switching between kernel threads.
          Disadvantage:
          • If one user-space thread blocks, all the user-space threads mapped to that kernel thread are blocked
          • This model does not noticeably improve performance
        • Multiple Vs. Multiple
          Multiple user-space threads are mapped onto a smaller or equal number of kernel threads.
          This avoids the problems of the two models above, although its performance is not necessarily better than One Vs. One.

Understanding Pointers (for beginners)

Variable definition:

(figure: variable)

The symbol table is used to get the actual address behind a variable name.

Pointer Variable is:

(figure: pointer variable)

A pointer variable needs two elements: a name and a type. The name of the pointer stands for the pointer's own address (its lvalue); the value stored there (its rvalue) is the address of another variable, through which we can reach that variable.

The type of the pointer determines how far the pointer moves with each step: for different types, adding 1 to the pointer advances it by a different number of bytes.
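
For example (a minimal sketch; the variable names are arbitrary):

int  ints[4];
char chars[4];
int  *ip = ints;    /* ip + 1 moves by sizeof(int) bytes, typically 4 */
char *cp = chars;   /* cp + 1 moves by sizeof(char) bytes, always 1   */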

Pointer with Array:

(figure: pointer and array)

An array is essentially an area of memory: a contiguous block in which every element has the same type.

A pointer to the array indicates its start position; when 1 is added, the pointer advances by the length of the element type.

The address of the array itself is constant; we cannot change the address of an array.
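
For example (a minimal sketch):

int a[3] = {10, 20, 30};
int *p = a;                /* p points at a[0], the start of the array        */
int second = *(p + 1);     /* same as a[1]: p + 1 skips exactly one int       */
/* a = p; would not compile: the array's address cannot be changed            */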

Pointer with string:

(figure: pointer and string)

// The compiler counts the characters, leaves room for the nul character,
// and stores the total of four characters in memory.
// The storage is taken from the static (data) memory block.
// my_name behaves as a constant: the array's address cannot be changed.
char my_name[] = "Ted";

// Here my_name is a variable: a pointer that holds the address of the
// string literal "Ted", which lives in the data segment.
char *my_name = "Ted";

void funcA(char *ptr){
    // a[] lives on the stack; its contents are copied from the data segment
    // every time funcA is called.
    // Making a[] static would instead keep the data in the data segment
    // and avoid the copy.
    char a[] = "ABCDE";
}

void funcB(char* ptr){
    // cp is a pointer on the stack that points at the string literal;
    // the data itself stays in the data segment and is not copied.
    char *cp = "ABCDE";
}

Pointer with structure:

(figure: pointer and structure)

A structure is also a contiguous block of memory, but it does not hold a single type; it packs together a mix of several basic types.

A pointer to a structure indicates only the start of the structure; you reach a member either with the -> operator or by moving the pointer past the members that come before it.
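
For example (a minimal sketch; the structure is made up for illustration):

struct point {
    int x;                    /* at offset 0                                  */
    int y;                    /* at offset sizeof(int) on common platforms    */
};

struct point pt = {1, 2};
struct point *pp = &pt;       /* pp points at the start of the structure      */
int y1 = pp->y;               /* member access through the -> operator        */
int y2 = (&pt)->y;            /* equivalent                                   */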

Pointer with the Arrays of Arrays: