Kernels before Linux 2.4 had a limit on the maximum number of processes. The reason is that every process had its own TSS and LDT, and the TSS (Task State Segment) descriptor and LDT (Local Descriptor Table) descriptor both had to live in the GDT. The GDT can hold at most 8192 descriptors; after subtracting the 12 descriptors used by the system, the maximum number of processes was (8192 - 12) / 2 = 4090.

Since Linux 2.4, all processes share the same TSS. More precisely, there is one TSS per CPU, and every process running on a given CPU uses that CPU's TSS. The TSS is declared in asm-i386/processor.h as follows:
extern struct tss_struct init_tss[NR_CPUS];
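For orientation, here is a hedged, abridged sketch of struct tss_struct, roughly following the 2.4 asm-i386/processor.h layout. It is not the full kernel definition: only the fields this article relies on (esp0, bitmap, io_bitmap) are spelled out, the remaining hardware-defined register slots are summarized in a comment, and the IO_BITMAP_SIZE value is the one I believe 2.4 used.

#define IO_BITMAP_SIZE	32	/* assumed 2.4 value: 32 longs = 1024 I/O ports */

struct tss_struct {
	unsigned short	back_link, __blh;
	unsigned long	esp0;		/* ring-0 stack pointer loaded on ring3 -> ring0 transitions */
	unsigned short	ss0, __ss0h;	/* ring-0 stack segment */
	/* esp1/ss1, esp2/ss2, cr3, eip, eflags, the general and segment
	 * registers and the ldt selector sit here in the hardware-defined
	 * order; software switching no longer uses these slots */
	unsigned short	trace, bitmap;	/* 'bitmap': offset of the I/O permission bitmap inside the TSS */
	unsigned long	io_bitmap[IO_BITMAP_SIZE + 1];	/* the I/O permission bitmap itself */
	unsigned long	__cacheline_filler[5];		/* pads the TSS to a cacheline multiple */
};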
The TSS is initialized and loaded in start_kernel()->trap_init()->cpu_init():
void __init cpu_init (void)
{
	int nr = smp_processor_id();		/* id of the current CPU */
	struct tss_struct *t = &init_tss[nr];	/* the TSS used by this CPU */

	t->esp0 = current->thread.esp0;		/* set esp0 in the TSS to the current task's esp0 */
	set_tss_desc(nr, t);			/* install this TSS's descriptor in the GDT */
	gdt_table[__TSS(nr)].b &= 0xfffffdff;	/* clear the descriptor's busy bit */
	load_TR(nr);				/* load the task register, i.e. load the TSS */
	load_LDT(&init_mm.context);		/* load the LDT */
}
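The line gdt_table[__TSS(nr)].b &= 0xfffffdff; deserves a note: bit 9 of the descriptor's high dword (bit 41 of the whole 8-byte descriptor) is the TSS "busy" flag, and ltr faults if asked to load a descriptor that is already marked busy, so cpu_init() clears it first. A minimal, self-contained sketch (the 0x00008B00 value is a made-up example, not taken from the kernel) showing that this mask clears exactly that bit:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: in a TSS descriptor the type field is bits 8-11 of the
 * high dword; type 0x9 = available 32-bit TSS, 0xB = busy 32-bit TSS, so the
 * busy flag is bit 9. */
#define TSS_BUSY_BIT	(1u << 9)

int main(void)
{
	uint32_t high = 0x00008B00;	/* hypothetical high dword: present bit set, type 0xB (busy TSS) */

	high &= 0xfffffdff;		/* the same mask cpu_init() applies to gdt_table[__TSS(nr)].b */

	assert((high & TSS_BUSY_BIT) == 0);		/* busy flag cleared, type is now 0x9 */
	printf("high dword after mask: 0x%08x\n", high);	/* prints 0x00008900 */
	return 0;
}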
As we know, a hardware task switch uses the TSS to save all of the registers (before 2.4 the switch was performed with a jmp to the next task's TSS descriptor), and when an interrupt arrives the CPU also reads the ring-0 stack pointer esp0 from the TSS. So if all processes share the same TSS, how can task switching work?

The answer is that since 2.4 the kernel no longer does hardware task switches; it switches in software. Registers are no longer saved in the TSS but in task->thread, and of the TSS only esp0 and the I/O permission bitmap are still used. A process switch therefore only has to update esp0 and the io bitmap in the per-CPU TSS. The call path is schedule() (kernel/sched.c) -> switch_to() -> __switch_to() (arch/i386/kernel/process.c):
void fastcall __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
	struct thread_struct *prev = &prev_p->thread,
			     *next = &next_p->thread;
	struct tss_struct *tss = init_tss + smp_processor_id();	/* TSS of the current CPU */

	/*
	 * Reload esp0, LDT and the page table pointer:
	 */
	tss->esp0 = next->esp0;		/* update tss->esp0 with the next task's esp0 */

	/* copy the next task's io_bitmap into tss->io_bitmap */
	if (prev->ioperm || next->ioperm) {
		if (next->ioperm) {
			/*
			 * 4 cachelines copy ... not good, but not that
			 * bad either. Anyone got something better?
			 * This only affects processes which use ioperm().
			 * [Putting the TSSs into 4k-tlb mapped regions
			 * and playing VM tricks to switch the IO bitmap
			 * is not really acceptable.]
			 */
			memcpy(tss->io_bitmap, next->io_bitmap,
			       IO_BITMAP_BYTES);
			tss->bitmap = IO_BITMAP_OFFSET;
		} else
			/*
			 * a bitmap offset pointing outside of the TSS limit
			 * causes a nicely controllable SIGSEGV if a process
			 * tries to use a port IO instruction. The first
			 * sys_ioperm() call sets up the bitmap properly.
			 */
			tss->bitmap = INVALID_IO_BITMAP_OFFSET;
	}
}
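The io bitmap branch above is only exercised by tasks that have called ioperm(). As a small user-space illustration (not from the kernel source; it needs root, an x86 Linux box and glibc's sys/io.h), the program below goes through sys_ioperm(), which sets up the per-task io bitmap that __switch_to() then copies into the shared per-CPU TSS every time this task is scheduled:

#include <stdio.h>
#include <sys/io.h>	/* ioperm(), inb(), outb() -- x86 glibc */

int main(void)
{
	/* Ask for access to 2 ports starting at 0x70 (the CMOS index/data
	 * ports). sys_ioperm() fills in this task's io bitmap; from then on
	 * __switch_to() propagates it into the per-CPU TSS, so the port I/O
	 * below does not raise SIGSEGV. */
	if (ioperm(0x70, 2, 1) < 0) {
		perror("ioperm");	/* typically fails without root */
		return 1;
	}

	outb(0x0A, 0x70);		/* select CMOS status register A ... */
	printf("CMOS status A: 0x%02x\n", inb(0x71));	/* ... and read it */

	return 0;
}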