[FoRK] Windows Whoops
Gregory Alan Bolcer
greg at bolcer.org
Wed Jul 12 08:54:40 PDT 2017
I laughed at this article. I have had a couple of Razer mice over the
years. VMware screws w/ the settings. Windows always changes the
"increase resolution" setting or whatever it's called. The Razer
drivers constantly seize up if you aren't properly configured on a
USB 3 port versus other ports. Razer even told me my mouse pad was
dirty once instead of acknowledging their crappy drivers.
That said, once everything is loaded and privacy configured, using
Bash/Ubuntu terminal on a Win10 machine is awesome. It's better than
OSX and better than Ubuntu by itself.
On 7/11/2017 7:57 PM, Stephen D. Williams wrote:
> I've observed, over and over, how badly Windows handles creating
> processes and, generally, running smoothly. Apparently the problem is
> finishing processes. Windows 7 might not have this problem, but
> something(s) else makes process startup slow. I always kind of felt like
> it was a tradition, passed down from VMS, which was painfully slow
> starting processes. Switching to Unix, in 1984, was shockingly
> refreshing and magical. Windows 7/10 vs. Linux still feels that way to
> me. Maybe this guy's Windows kernel debugging will actually make it
> much better. How could everyone, even at Microsoft, just accept this?
> Nice debugger though! I'm jealous. Not jealous enough.
>> This story begins, as they so often do,
>> when I noticed that my machine was behaving poorly. My Windows 10 work
>> machine has 24 cores (48 hyper-threads) and they were 50% idle. It has
>> 64 GB of RAM and that was less than half used. It has a fast SSD that
>> was mostly idle. And yet, as I moved the mouse around it kept hitching
>> – sometimes locking up for seconds at a time.
>> So I did what I always do – I grabbed an ETW trace and analyzed it.
>> The result was the discovery of a serious process-destruction
>> performance bug in Windows 10.
>> There were 5,768 context switches where /NtGdiCloseProcess/ was on the
>> /Ready Thread Stack/, each one representing a time when the critical
>> region was released. The threads readied on these call stacks had been
>> waiting a combined total of 63.3 seconds – not bad for a 1.125 second
>> period! And, if each of these readying events happened after the
>> thread had held the lock for just /200/ microseconds then the 5,768
>> readying events would be enough to account for the 1.125 second hang.
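The arithmetic in the quoted paragraph checks out; a back-of-the-envelope Python sketch (using only the numbers quoted above) makes it concrete:

```python
# Sanity check of the figures in the quoted trace analysis:
# 5,768 lock-release (readying) events, each preceded by roughly
# 200 microseconds of lock hold time, serialize to about the
# observed 1.125 second hang.
events = 5768
hold_seconds = 200e-6            # assumed ~200 us hold per readying event
total_hold = events * hold_seconds
avg_wait = 63.3 / events         # 63.3 s of combined thread wait time

print(f"serialized lock holds: {total_hold:.3f} s")   # ~1.154 s vs. the 1.125 s hang
print(f"average wait per readied thread: {avg_wait * 1000:.1f} ms")
```

So the 200 µs/hold figure is sufficient, on its own, to explain the full hang, and each waiting thread queued for about 11 ms on average.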
>> I’m not familiar with this part of Windows but the combination of
>> /PspExitThread/ and /NtGdiCloseProcess/ made it clear that this
>> behavior was happening during process exit.
>> This was happening during a build of Chrome, and a build of Chrome
>> creates a /lot/ of processes. I was using our distributed build system
>> which means that these processes were being created – and destroyed –
>> quite quickly.
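The build-system workload described above is essentially rapid process churn. A minimal, cross-platform Python sketch of that pattern (a trivial child stands in for a compiler invocation; the timing you see will obviously differ from the Windows numbers in the post):

```python
import subprocess
import sys
import time

def time_process_churn(n=10):
    """Time n create/wait/destroy cycles of a trivial child process
    and return the average seconds per cycle."""
    start = time.perf_counter()
    for _ in range(n):
        # A do-nothing child: all cost is process creation/destruction.
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    return (time.perf_counter() - start) / n

if __name__ == "__main__":
    per_proc = time_process_churn()
    print(f"~{per_proc * 1000:.1f} ms per create/destroy cycle")
```

On a machine with the bug described in the post, destruction (not creation) dominates that per-cycle cost, and it serializes across all cores.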
>> The next step was to find out how much time was being spent inside of
>> /NtGdiCloseProcess/. So I moved to the /CPU Usage (Sampled)/
>> table in WPA and got a butterfly graph, this time of callees of
>> /NtGdiCloseProcess/. You can see from the screen shot below that over
>> a 1.125 s period there was, across the entire system, about 1085 ms of
>> time spent inside of /NtGdiCloseProcess/, representing 96% of the
>> wall-clock time.
>> Anytime you have a lock that is held more than 95% of the time by one
>> function you are in a very bad place – especially if that same lock
>> must be acquired in order to call /GetMessage/ or update the mouse
>> cursor position.
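To illustrate why a lock held more than 95% of the time is so pathological, here is a small Python threading sketch (an analogy, not Windows internals): one "greedy" thread holds a lock about 95% of the time, while a second thread, standing in for anything that must take the same lock, measures how long each acquisition makes it wait.

```python
import threading
import time

lock = threading.Lock()
wait_times = []

def greedy_worker(stop):
    """Holds the lock ~95% of the time (19 ms held, 1 ms released)."""
    while not stop.is_set():
        with lock:
            time.sleep(0.019)
        time.sleep(0.001)

def victim():
    """Needs the same lock briefly; records how long each acquire waits."""
    for _ in range(5):
        t0 = time.perf_counter()
        with lock:
            pass  # trivial critical section, like a message pump poke
        wait_times.append(time.perf_counter() - t0)

stop = threading.Event()
g = threading.Thread(target=greedy_worker, args=(stop,))
g.start()
v = threading.Thread(target=victim)
v.start()
v.join()
stop.set()
g.join()

print(f"worst wait to acquire lock: {max(wait_times) * 1000:.1f} ms")
```

Even though the victim's critical section is essentially free, its latency is dictated by the greedy holder, which is exactly how a GDI lock held in process destruction ends up hitching the mouse.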
> Recently we tried to use the Windows Subsystem for Linux (WSL), but
> apparently the file system handling is so broken that it is unusable,
> even though it is pretty cool in some ways.
> FoRK mailing list