http://weltall.heliohost.org/wordpress/2011/05/14/running-vmware-workstation-player-on-linux-2-6-39-updated/


UPDATE5: It seems many are still using the 7.1.x series of VMware. Since VMware didn't update the kernel sources there but only in 8.0.0 (I wonder why… maybe marketing reasons), as some have pointed out in the comments, to make these patches work you just need to change the VMware version stated in the patch script from 7.1.4 to 7.1.5. The same goes for the Player, from 3.1.4 to 3.1.5.

UPDATE4: A fix for guest kernel modules is available here

UPDATE3: It seems patchv3 had a little bug which prevented building, at least on linux 3.0.0-rc1. I've replaced the file, so if you downloaded the archive before seeing this update, please get it again (especially if you had issues with it).

UPDATE2: I've added the script that was frequently used for vmware patches in the past (I also had to fix it so it handles vmblock), and I've updated the patch to pass the correct variable to the kern_path function. vmware2.6.39patchv3.tar

This is an update to the previous post, fixing some issues with the patch to the vmblock module (so it no longer crashes and initializes properly); it should also support building on previous kernels.

Even though VMware updated their packages to support kernels up to the 2.6.38 series, their Linux support is still quite lacking, as they entirely forgot to support 2.6.39, which was just released.

So, since it seems nobody has bothered yet according to Google and the VMware community, to overcome this problem I had to put my hands on the sources to make the modules build and run under this new kernel version. Little by little I removed many of the remaining things that needed the BKL or that misused SPINLOCK defines (defines whose use has been discouraged for some years now, according to kernel documentation floating around).

The process was quite successful and I could run the VM well.

Attached Files:

  • vmware patch for linux 2.6.39 (v2)

    A working patch for VMware so it can be used with the Linux 2.6.39 kernel

  • vmware2.6.39patchv3.tar

    This patch fixes the VMware products' kernel modules so they run on Linux 2.6.39. Changelog from the second patch: fixed a wrong variable being passed to kern_path; added a script based on Artem S. Tashkinov's work (it needed to be fixed so it also patches vmblock).



screen



http://aperiodic.net/screen/quick_reference

- A short reference for screen

screen Quick Reference

Getting in

start a new screen session with session name: screen -S <name>
list running sessions/screens: screen -ls
attach to a running session: screen -r
… to session with name: screen -r <name>
the "ultimate attach": screen -dRR (attaches to a screen session; if the session is attached elsewhere, detaches that other display; if no session exists, creates one; if multiple sessions exist, uses the first one)
detach a running session: screen -d <name>

Escape key

All screen commands are prefixed by an escape key, by default C-a (that's Control-a, sometimes written ^a). To send a literal C-a to the programs in screen, use C-a a.

Getting out

detach: C-a d
detach and logout (quick exit): C-a D D
exit screen: "C-a : quit", or exit all of the programs in screen
force-exit screen: C-a C-\ (not recommended)

Help

see help: C-a ? (lists keybindings)

The man page is the complete reference, but it's very long.

Window Management

create new window: C-a c
change to last-visited active window: C-a C-a (commonly used to flip-flop between two windows)
change to window by number: C-a <number> (only for windows 0 to 9)
change to window by number or name: C-a ' <number or title>
change to next window in list: C-a n or C-a <space>
change to previous window in list: C-a p or C-a <backspace>
see window list: C-a " (allows you to select a window to change to)
show window bar: C-a w (if you don't have a window bar)
close current window: close all applications in the current window (including the shell)
kill current window: C-a k (not recommended)
kill all windows: C-a \ (not recommended)
rename current window: C-a A

Split screen

split display horizontally: C-a S
split display vertically: C-a | or C-a V (for the vanilla vertical screen patch)
jump to next display region: C-a tab
remove current region: C-a X
remove all regions but the current one: C-a Q

Scripting

send a command to a named session: screen -S <name> -X <command>
create a new window and run ping example.com: screen -S <name> -X screen ping example.com
stuff characters into the input buffer, using bash to expand a newline character: screen -S <name> [-p <page>] -X stuff $'quit\r'
a full example
# run bash within screen
screen -AmdS bash_shell bash
# run top within that bash session
screen -S bash_shell -p 0 -X stuff $'top\r'
 
# ... some time later
 
# stuff 'q' to tell top to quit
screen -S bash_shell -X stuff 'q'
# stuff 'exit\n' to exit bash session
screen -S bash_shell -X stuff $'exit\r'

Misc

redraw window: C-a C-l
enter copy mode: C-a [ or C-a <esc> (also used for viewing the scrollback buffer)
paste: C-a ]
monitor window for activity: C-a M
monitor window for silence: C-a _
enter digraph (for producing non-ASCII characters): C-a C-v
lock (password protect) display: C-a x
enter screen command: C-a :





http://www.alexonlinux.com/pthread-spinlocks


Continuing my previous post, I would like to talk about a relatively new feature in glibc, and in pthreads in particular. I am talking about spinlocks.

Quite often you want to protect some simple data structure from simultaneous access by two or more threads. As in my previous post, this can even be a simple integer. Using mutexes and semaphores (critical sections, if you wish) to protect a simple integer is overkill, and here's why.

Modifying or reading the value of a single integer quite often requires as few as two or three CPU instructions. Acquiring a semaphore, on the other hand, is a huge operation. It involves at least one system call, translated into thousands of CPU instructions. The same goes for releasing the semaphore.

Things are a little better with mutexes, but still far from perfect. POSIX mutexes in Linux are implemented using futexes. Futex stands for Fast Userspace muTEX. The idea behind the futex is not to do a system call when locking an unlocked futex. Waiting for a locked futex still requires a system call, because this is how processes wait for something in Linux. Yet locking an unlocked futex can be done without asking the kernel to do things for you. Therefore, locking a futex is, at least in some cases, very fast.

The problem is that in the rest of the cases a mutex in Linux is as slow as a semaphore. And as with semaphores, spending thousands of CPU cycles just to protect a single integer is definitely overkill.

This is exactly the problem spinlocks solve. A spinlock is another synchronization mechanism. It works in the same manner as a mutex, i.e. only one thread can have it locked at the same time. However, there is a difference.

When a thread tries to lock a locked spinlock, it won't sleep waiting for the spinlock to get unlocked. It will busy-wait, i.e. spin in a while loop. This is why it's called a spinlock.

Locking a spinlock takes only tens of CPU cycles. One important thing with spinlocks is to hold them only for short periods of time. Don't be surprised when your program begins consuming too much CPU, all because one of your threads held some spinlock for too long. To avoid this, try not to execute big chunks of code while holding a spinlock. And by all means avoid doing I/O while holding a spinlock.
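
To illustrate what the spinning actually looks like, here is a minimal toy sketch of a busy-waiting lock built on the gcc __sync builtins (this is not the pthread API the rest of the post is about, just an illustration of the idea):

/* Toy spinlock, for illustration only: shows the busy-wait loop. */
typedef struct { int locked; } toy_spinlock_t;

static void toy_spin_lock(toy_spinlock_t *l)
{
    /* __sync_lock_test_and_set atomically stores 1 and returns the old
     * value; while the old value is 1, someone else holds the lock. */
    while (__sync_lock_test_and_set(&l->locked, 1))
        ;   /* busy wait: burns CPU cycles instead of sleeping */
}

static void toy_spin_unlock(toy_spinlock_t *l)
{
    __sync_lock_release(&l->locked);   /* atomically stores 0 */
}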

Now for the easy part.

First, to make things work, you have to include pthread.h. pthread_spin_init() initializes a spinlock, pthread_spin_lock() locks one, and pthread_spin_unlock() unlocks it, all just as with pthread mutexes. And of course there are manual pages for each and every one of them.
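
As a minimal sketch of the API (assuming two threads bumping a shared counter; compile with -pthread), usage looks roughly like this:

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
static long counter;                  /* the "simple integer" we want to protect */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_spin_lock(&lock);     /* busy-waits if another thread holds it */
        counter++;                    /* keep the critical section tiny */
        pthread_spin_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_spin_destroy(&lock);

    printf("counter = %ld\n", counter);   /* 2000000 if the locking is correct */
    return 0;
}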



A good article on why atomic variables end up being used and on the RCU approach. It is explained clearly and is easy to understand.


C/C++ reference counting with atomic variables and gcc



Introduction

Let's say we have a data structure that manages objects, and we would like to manipulate the data structure and the objects from two or more threads. To achieve maximum performance we have to distinguish between the mechanism we use to protect the data structure itself and the mechanism we use to protect the actual objects.

What is reference counting needed for?

To explain why two mechanisms provide the best performance and how atomic variables aid us, consider the following scenario.

The first thread (the manipulator) has to find a particular object in the data structure and change it. The second thread (the eraser) deletes expired objects. From time to time, the first thread tries to manipulate an object that the second thread is trying to delete. This is a classical scenario that can be found in almost any multithreaded program.

Obviously the two threads have to use some sort of mutual exclusion mechanism. Let's say we protect the entire data structure with a single mutex. In this case, the manipulator thread has to hold the mutex locked for as long as it takes to find an object and manipulate it. This means the eraser thread won't be able to access the data structure for as long as the manipulator thread works. If all the manipulator thread does is search for entries and manipulate them, then the eraser thread just won't be able to access the data structure at all.

Depending on the circumstances, this can be quite devastating for the system. And, as I stated earlier, the only way to solve this problem is to separate the mechanism that protects the data structure from the one that protects the actual objects.

If we do so, the manipulator thread will hold the mutex only while it searches for the object. Once it has the object, it can release the mutex that protects the data structure, allowing the eraser thread to do its job.

But then how can we make sure that the eraser thread does not delete the very same object that the first thread is modifying?

We could use another mutex to protect the contents of the actual object. There is one important question we have to ask ourselves here: how many mutexes do we need to protect the objects' contents?

If we use one mutex for all objects, then we run into the same bottleneck we've just avoided. If, on the other hand, we use a mutex per object, then we run into slightly different trouble.

Assume we've created a mutex per object. How do we manage the mutexes for the objects? We could put a mutex into the object itself, but this creates another problem. Suppose the eraser thread decides to delete the object, but a moment before it does, the manipulator thread decides to check the object and tries to lock its mutex. Since the mutex has already been taken by the eraser thread, the manipulator will fall asleep. Yet right after the manipulator falls asleep, the eraser will delete the object and its mutex, leaving the manipulator sleeping on a non-existent object. Ouch.

There are actually a couple of things we can do. One of them is implementing reference counting on the objects. But then we have to protect the reference counter itself. Luckily, thanks to atomic variables, we don't need a mutex to protect the counter.

This is how we will use atomic variables to count references to objects

First, we initialize our object's reference counter to 1. When the manipulator thread begins using the object, it should increase the reference counter by one. When it has finished with the object, it should decrease the reference counter. In other words, as long as the manipulator uses the object, it should keep the reference counter incremented.

Once the eraser thread decides to delete a certain object, it should first remove the object from the data structure; it should hold the mutex that protects the data structure to do that. Next it has to decrease the object's reference counter by one.

Now remember that we initialized the reference counter to 1. So if the object is not in use, its reference counter should now become zero. If it is zero, we can safely delete the object: since the object is no longer in the data structure, we can be sure the first thread won't use it anymore.
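
A minimal sketch of this protocol in C, using the gcc atomic builtins introduced later in the article (the names object_get, object_put and eraser_drop are mine, not from the original article):

#include <stdlib.h>

struct object {
    int refcount;              /* initialized to 1 when the object is created */
    /* ... payload ... */
};

/* Manipulator: hold a reference for as long as the object is in use. */
static void object_get(struct object *obj)
{
    __sync_add_and_fetch(&obj->refcount, 1);
}

static void object_put(struct object *obj)
{
    __sync_sub_and_fetch(&obj->refcount, 1);
}

/* Eraser: call this after the object has been unlinked from the data
 * structure (with the structure's mutex held).  Returns 1 if the object
 * could be freed right away, 0 if a manipulator still holds a reference
 * and deletion has to be deferred (see the two approaches below). */
static int eraser_drop(struct object *obj)
{
    if (__sync_sub_and_fetch(&obj->refcount, 1) == 0) {
        free(obj);
        return 1;
    }
    return 0;
}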

What if the reference counter is still greater than zero? In that case we should wait until the object's reference counter becomes zero. But how?

Answering this question can be quite tricky. I usually consider one of two approaches: the naive approach, or an approach I call the RCU approach.

The naive approach

The naive approach goes like this. We create a list of objects that we want to delete. Every time our eraser thread wakes up, it runs through the list and deletes all objects whose reference counter is zero.

Depending on the circumstances, there might be a problem with this solution. The list of objects we want to delete may grow, and running through the entire list can hog the CPU. Depending on how often we delete objects this may be a good enough solution, but if it's not, we have to consider the RCU approach.
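
Continuing the sketch above, the naive reaper might look roughly like this (the singly linked dead_list and its handling are my illustration, not code from the article):

struct dead_node {
    struct object    *obj;
    struct dead_node *next;
};

static struct dead_node *dead_list;    /* objects already unlinked from the
                                          main data structure, waiting to die */

/* Naive approach: walk the whole list and free everything that is no
 * longer referenced.  The cost grows with the length of the list. */
static void reap_naive(void)
{
    struct dead_node **pp = &dead_list;

    while (*pp) {
        struct dead_node *n = *pp;

        if (__sync_add_and_fetch(&n->obj->refcount, 0) == 0) {  /* atomic read */
            *pp = n->next;             /* unlink from the dead list */
            free(n->obj);
            free(n);
        } else {
            pp = &n->next;             /* still in use: check again next time */
        }
    }
}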

The RCU approach

RCU is another synchronization mechanism. It is used primarily in the Linux kernel; you can read about it here. I call this approach the RCU approach because it has some similarity to how RCU, the synchronization mechanism, works.

The idea is to assume that it takes the manipulator thread a limited amount of time to work with the object, i.e. the manipulator keeps using the object for a certain period of time at most. After that period of time, the object should no longer be in use and the eraser thread should be able to delete it.

Let's say that it takes the manipulator thread at most 1 minute to do its manipulations. I am exaggerating on purpose here. When the eraser thread wants to delete the object, all it has to do is grab the mutex or spinlock that protects the data structure containing the objects and remove the object from the data structure.

Next it has to get the current time and save it in the object. The object should have methods that allow it to remember its deletion time. Then the eraser thread has to append it to the end of the deleted-objects list. This is a special list that contains all objects we want to delete, just as in the naive approach.

[Figure: the deleted-objects list, ordered by deletion timestamp]

Note that appending objects to the end of the list keeps it sorted by deletion time: earlier objects in the list are those with earlier deletion times.

Next, the eraser thread has to return to the beginning of the list, grab an object, and check whether one minute has passed since it was removed from the data structure. If so, it should check its reference counter and make sure it is indeed 0. If it is not, it should update its deletion time to the current time and append it to the end of the list again; note that normally this shouldn't happen. If its reference counter is 0, then the thread should remove the object from the list and delete it.

To demonstrate how this works, take a look at the figure above. Let's say the time now is 15:35:12. This is more than one minute after we deleted the first object. Therefore, once the eraser wakes up and checks the list, it will instantly see that the first object in the list was deleted more than a minute ago. This is the time to delete that object from the list and check the next one.

It will check the second object, see that it has been in the list for less than one minute, and keep it in the list. Now here is the interesting property of this approach: the eraser thread doesn't have to check the rest of the list. Because we always append objects to the end of the list, the list is sorted, and the eraser thread can be sure that the rest of the objects shouldn't be deleted yet either.

So instead of passing through the entire list, as we did in the naive approach, we can use timestamps to build a sorted list of objects and check only a few objects at the beginning of the list. Depending on the circumstances, this can save a huge amount of processing time.
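
Continuing the same sketch, a timestamped version of the reaper might look like this (GRACE_SECONDS plays the role of the one-minute window; the list handling is again my illustration):

#include <time.h>

#define GRACE_SECONDS 60                 /* the "one minute at most" assumption */

struct timed_node {
    struct object     *obj;
    time_t             deleted_at;       /* timestamp taken when unlinked */
    struct timed_node *next;
};

static struct timed_node *rcu_head, *rcu_tail;   /* appended at the tail, so the
                                                    list stays sorted by deletion time */

static void rcu_defer(struct object *obj)
{
    struct timed_node *n = malloc(sizeof(*n));

    n->obj = obj;
    n->deleted_at = time(NULL);
    n->next = NULL;
    if (rcu_tail)
        rcu_tail->next = n;
    else
        rcu_head = n;
    rcu_tail = n;
}

/* Only the head of the list needs to be examined: as soon as we meet an
 * entry younger than the grace period, everything behind it is younger too. */
static void reap_rcu_style(void)
{
    time_t now = time(NULL);

    while (rcu_head && now - rcu_head->deleted_at >= GRACE_SECONDS) {
        struct timed_node *n = rcu_head;

        rcu_head = n->next;
        if (!rcu_head)
            rcu_tail = NULL;

        if (__sync_add_and_fetch(&n->obj->refcount, 0) != 0) {
            /* Shouldn't normally happen: give it another grace period. */
            n->deleted_at = now;
            n->next = NULL;
            if (rcu_tail)
                rcu_tail->next = n;
            else
                rcu_head = n;
            rcu_tail = n;
            continue;
        }

        free(n->obj);
        free(n);
    }
}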

Where do atomic variables come from?

Starting with version 4.1.2, gcc has built-in support for atomic variables. They work on the majority of architectures, but before using them, please check whether your architecture supports them.

There’s a set of functions that manipulate atomic variables.

type __sync_fetch_and_add (type *ptr, type value);
type __sync_fetch_and_sub (type *ptr, type value);
type __sync_fetch_and_or (type *ptr, type value);
type __sync_fetch_and_and (type *ptr, type value);
type __sync_fetch_and_xor (type *ptr, type value);
type __sync_fetch_and_nand (type *ptr, type value);

type __sync_add_and_fetch (type *ptr, type value);
type __sync_sub_and_fetch (type *ptr, type value);
type __sync_or_and_fetch (type *ptr, type value);
type __sync_and_and_fetch (type *ptr, type value);
type __sync_xor_and_fetch (type *ptr, type value);
type __sync_nand_and_fetch (type *ptr, type value);

There's no need to include any header file to use these functions. If your architecture doesn't support them, you will receive a linker error as if you were calling a non-existent function.

Each one of these functions manipulates a variable of a certain type; type can be char, short, int, long, long long, or their unsigned counterparts.

It is a good idea to wrap these functions with something that hides the implementation details. This can be a class that represents an atomic variable, or an abstract data type.
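
For instance, a tiny counter wrapper in C (a sketch of the idea; the article's suggested C++ class would work just as well) could hide the builtins like this:

/* Minimal atomic counter abstraction hiding the gcc builtins. */
typedef struct { int value; } atomic_t;

static inline void atomic_set(atomic_t *a, int v) { a->value = v; }  /* only safe before sharing */
static inline int  atomic_read(atomic_t *a)       { return __sync_add_and_fetch(&a->value, 0); }
static inline int  atomic_inc(atomic_t *a)        { return __sync_add_and_fetch(&a->value, 1); }
static inline int  atomic_dec(atomic_t *a)        { return __sync_sub_and_fetch(&a->value, 1); }

With such a wrapper, the reference-counting sketch above becomes atomic_inc(&obj->refcount) and atomic_dec(&obj->refcount), and switching later to C11 atomics or to another compiler only touches this one spot.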

Finally, I have a nice example of how to use atomic variables in my article on multithreaded simple data type access and atomic variables.

Conclusion

I hope you've found this interesting. In case you have further questions, please send me an email at alex@alexonlinux.com.


