Some Nerdy Stuff

April 8, 2010

Why are SSDs so expensive?

Filed under: Uncategorized — aaronls @ 10:55 am

Basically an SSD is just a bunch of flash chips, with each chip providing a specific amount of storage space; larger SSDs require more of these chips. These flash memory chips are built on silicon using 32nm and 45nm manufacturing processes, similar to CPUs. However, flash memory isn’t as complicated as a CPU, and techniques like multi-level cell storage let flash fit more data into a smaller space, so the cost isn’t quite as high as a CPU’s. Still, it takes several flash chips to provide the storage for an SSD. SSDs also have other components that add to the cost, such as the controller chips that handle the flow of data to and from the flash memory. Some SSDs even carry extra flash storage that you don’t know about: it sits on standby so that if some of the flash memory goes bad it can replace the bad memory, and as a user of the drive you never know something went wrong.  The process of making these chips is expensive.  Even if a single chip can hold 16 GB of data, it takes 4 of them to make a 64 GB SSD, and if you use the price of a really cheap CPU as a benchmark, you are talking about at least $30 per chip.  So that’s $120 just for the flash memory, and that doesn’t include the other components that comprise the drive.

So part of it is simply that the process of making flash memory is expensive. The other big factor is storage density. HDD manufacturers have managed to continually evolve hard drives so that they can fit more data in the same space, and thus similar manufacturing processes can produce more storage. For flash memory chips to get cheaper, manufacturers have to figure out how to use the same or similar manufacturing processes while fitting more storage into the same flash memory chip. Increasing the storage density of the chip lowers the cost per gigabyte of storage, since you are producing a chip with more storage using the same manufacturing process; your manufacturing costs don’t go up significantly, but the amount of storage you’re producing does. The issue is that increasing storage density on silicon involves significant technological challenges in making the components of the chip smaller. The 45 nm and 32 nm processes currently used are named for the size of features on the chip, and making these features smaller so that more can fit on the chip is a genuinely difficult challenge.

Some recent research may improve storage density significantly in the next three years:

Another article explaining SSD cost in greater detail:


April 5, 2010

Potential Cost Savings of SSDs

Filed under: Uncategorized — aaronls @ 9:36 am

Solid state disks (SSDs) are currently very expensive in terms of price per gigabyte, at $1.95/GB, compared to hard disk drives (HDDs) at around $0.08/GB for a terabyte HDD, $0.18/GB for a 250GB HDD, or, if you wanted a 146GB 2.5″ 15K RPM server drive (henceforth enterprise HDD), a whopping $3.63/GB.  So the SSD already looks great compared to a high-end server drive.  However, for the enthusiast/professional/gamer market, where a high performance drive is mouth-watering but the price of an SSD is not, we should take into consideration the fact that an SSD’s cost will be offset to some extent by its power and cooling cost savings.  An SSD uses a small fraction of the power that an HDD does, which also translates to a significant decrease in heat dissipation.  This means lower power costs, lower cooling costs, and lower cost for the infrastructure to support cooling.  Additionally, SSDs are touted as being significantly more reliable than HDDs.

What I want to do is calculate the cost of a mainstream HDD over a 5 year period that includes its cost to power, cool, and replace (in the event of a failure) the drive.  We will do this by taking its purchase price, adding the cost to power it, adding the estimated cost to cool it, and also considering the average failure rate to estimate average replacement costs.  We might even add a premium on top of the replacement cost to account for the overhead of swapping in the replacement.

Then we will calculate the same cost for an SSD of similar size, and we will do the same for an enterprise drive.  When we are done we can calculate a $/GB that represents a total cost of ownership.  To do this accurately, I feel we first need to target a specific drive size.  The sizes of SSDs and enterprise HDDs don’t come close to what is available in mainstream sizes, and since we are considering some fairly pricey drives, we probably can’t afford to splurge on massive amounts of storage.  Let’s say around 128 GB is the space we anticipate needing for most of our tasks.  With that space, we can install video/audio production tools, development tools, and/or several games and still have a significant amount of working space for the data we would produce with those tools.  Additionally, the 128 GB SSDs seem to be a sweet spot in $/GB for SSDs and enterprise HDDs.  We could try to save some money by going for a 64 GB drive, but the $/GB is higher for SSDs, mainstream HDDs, and enterprise HDDs alike when we start going for smaller drives.

First let’s spec out the stats of our drives in terms of power consumption.  I’m going off of’s benchmark database for the HDD estimates, and for SSDs I am going to use some benchmarking articles that target a specific SSD in our 128 GB $250 price point category.

Estimated Active Consumption (Watts):

Mainstream HDDs = 10 W

Enterprise HDDs = 18 W

SSD = 5 W

Estimated Idle Consumption (Watts):

Mainstream HDDs = 7 W

Enterprise HDDs = 14 W

SSD = 0.7 W

Let’s assume that since we are considering a costly SSD, we are considering putting the drive to use in a way that makes it worth the hefty price tag.  So I will assume 4 hours a day of active usage and 20 hours of idle usage.  This assumes the system will be on 24/7, and I think that is a fair balance, since there will be scenarios where the drive has more active usage, and at the other end of the spectrum, scenarios where the drive will be idle all day or the system may be powered down.  Based on these assumptions, let’s calculate the daily watt-hour (Wh) consumption of each drive:

Mainstream HDDs = 140 Wh idle per day + 40 Wh active per day = 180 Wh

Enterprise HDDs = 280 Wh idle per day + 72 Wh active per day = 352 Wh per day

SSD = 14 Wh idle per day + 20 Wh active per day = 34 Wh per day

To roughly estimate cooling cost we will assume the drive dissipates the same amount of heat energy that it consumes in electricity, and then also assume that it costs that amount of energy to provide air conditioning that removes the generated heat.  The result is that cooling cost is equal to the power consumption cost.  So we can essentially just double the power consumption calculated above to give us a combined consumption for power and cooling.  These assumptions are not necessarily accurate, but they provide a fair estimate.  It is likely that some energy supplied to the drive is not converted to heat, making the actual heat dissipation lower than we calculated, but this is offset by the fact that it is likely that the inefficiencies of air conditioners means it will cost more to cool that energy than we estimate.  My guess is that in real life the cost of cooling will be much higher than our estimates.

So now we will calculate a 5 year total cost of power and cooling, assuming that the cost of electricity is $0.13 per kilowatt-hour (kWh):

Mainstream HDDs = 180 Wh/day x 365 days/year x 5 years x $.13/kWh x 2 = $85.41/5 years

Enterprise HDDs = 352 Wh/day x 365 days/year x 5 years x $.13/kWh x 2 = $167.02/5 years

SSDs = 34 Wh/day x 365 days/year x 5 years x $.13/kWh x 2 = $16.13/5 years
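These power-and-cooling figures reduce to a few lines of arithmetic. Here is a minimal Python sketch of the calculation; the wattages, the 4h/20h duty cycle, the $0.13/kWh rate, and the x2 cooling multiplier are all just the assumptions stated above, not measured values:

```python
def five_year_power_and_cooling_cost(active_w, idle_w,
                                     active_hours=4, idle_hours=20,
                                     rate_per_kwh=0.13, cooling_factor=2):
    """Estimated 5-year power + cooling cost for one drive.

    cooling_factor=2 doubles the electricity cost to roughly
    account for air conditioning, per the assumption above.
    """
    daily_wh = active_w * active_hours + idle_w * idle_hours
    kwh_over_5_years = daily_wh * 365 * 5 / 1000.0
    return kwh_over_5_years * rate_per_kwh * cooling_factor

print(round(five_year_power_and_cooling_cost(10, 7), 2))   # mainstream HDD: 85.41
print(round(five_year_power_and_cooling_cost(18, 14), 2))  # enterprise HDD: 167.02
print(round(five_year_power_and_cooling_cost(5, 0.7), 2))  # SSD: 16.13
```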

Now we will also estimate the annualized failure rate (AFR) over a 5 year period.  For the HDDs we will use statistics from Google’s white paper on HDD reliability:

Since SSDs haven’t been in wide usage as long or as much as HDDs, I doubt there are similar analyses available on the reliability of SSDs.  However, we can use manufacturers’ claimed AFRs as a guideline.  We can’t take a manufacturer’s reported AFR at face value, however, as the above report and many others show that manufacturers can’t be trusted to report accurate AFRs; HDDs have been shown to fail more often than manufacturers would like you to believe.  Seagate claims a 0.44% AFR on a particular SSD model, 0.55% AFR on a particular 15K RPM enterprise HDD, and 0.73% AFR on a particular 7200 RPM enterprise drive.  While these claims are likely inaccurate, we can hope that Seagate has at least kept them consistent relative to each other, so one can estimate that SSDs have roughly a one- to two-fifths lower AFR.  I couldn’t find any AFR claims for mainstream HDDs from Seagate for comparison, but one would expect their AFRs to be comparable or slightly higher, so we will assume SSDs are 2/5ths more reliable than HDDs.

Based on Google’s yearly AFR statistics one can calculate the 5 year survival rate of an HDD like so:

.98 x .92 x .91 x .94 x .93 = .717 => 71.7% of HDDs survive 5 years
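The compounding behind that survival figure is easy to check with a quick Python sketch, using the per-year AFR values implied by the multipliers above (2%, 8%, 9%, 6%, 7%):

```python
yearly_afr = [0.02, 0.08, 0.09, 0.06, 0.07]  # approximate AFR for years 1-5

survival = 1.0
for afr in yearly_afr:
    survival *= (1 - afr)  # probability of surviving each successive year

print(round(survival, 3))      # fraction surviving 5 years: 0.717
print(round(1 - survival, 3))  # probability of failing within 5 years: 0.283
```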

This means there is a 28.3% chance an HDD will need to be replaced in a 5 year period, and additionally there is a small chance that the replacement will fail in the remaining years, that the replacement’s replacement will fail, and so on.  However, calculating the probabilities of these subsequent failures involves going down several branches of conditional probabilities and would not amount to more than a few percentage points’ difference.

Based on the relative AFRs claimed by Seagate, we will estimate that SSDs have a 2/5ths lower 5 year failure probability than HDDs, putting SSDs at a 16.98% 5 year failure probability.  I don’t really know whether Google used mainstream or enterprise HDDs; I have heard rumors that they just use lots of low-end hardware and cluster it, but we will give enterprise HDDs the advantage by saying they are 1/4th more reliable than the HDDs Google tested, putting them at a 21.225% 5 year failure probability.

If we wanted to be really accurate we would calculate the survival rate per year and compute the replacement cost based on the depreciated retail price of the same drive; however, for the sake of simplicity we will assume a drive costs half as much to replace as it initially did to purchase.  On average this means a $100 drive that fails 25% of the time over a 5 year lifetime will cost an average of $12.50 (half of $25) in replacements over the first 5 years of its life.

Mainstream HDDs: .283 x $38 (160GB) / 2 = $5.38

Enterprise HDDs: .21225 x $260 (147 GB) / 2 = $27.59

SSDs: .1698 x $260 (128 GB) / 2 = $22.07
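Each figure above is just the 5-year failure probability times half the purchase price. Sketched in Python, with the prices and probabilities being the assumptions already stated:

```python
def expected_replacement_cost(fail_probability, purchase_price):
    # Assumption from above: a replacement costs half the original purchase price.
    return fail_probability * purchase_price / 2

print(round(expected_replacement_cost(0.283, 38), 2))     # mainstream HDD: 5.38
print(round(expected_replacement_cost(0.21225, 260), 2))  # enterprise HDD: 27.59
print(round(expected_replacement_cost(0.1698, 260), 2))   # SSD: 22.07
```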

I find it interesting that the cheapest 147GB enterprise HDD and 128GB SSD are the same price on NewEgg, and additionally are both 2.5″ form factor.  One might think these SSDs were priced to be competitive with the high performance enterprise HDDs, but on the other hand SSDs’ prices are supposedly determined by flash prices.

Now we sum the purchase, power/cooling, and replacement costs to give us a 5 year cost and also a 5 year cost per gigabyte:

Mainstream HDDs: $38 + $5.38 + $85.41 = $128.79 => $128.79 / 160GB = $.80/GB

Enterprise HDDs: $260 + $27.59 + $167.02 = $454.61 => $454.61 / 147GB = $3.09/GB

SSDs: $260 + $22.07 + $16.13 = $298.20 => $298.20 / 128GB = $2.33/GB
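Putting the three components together, here is a short Python sketch of the whole total-cost-of-ownership table; all dollar figures are the estimates computed above, and capacities are in GB:

```python
# name: (purchase $, expected replacement $, 5-year power+cooling $, capacity GB)
drives = {
    "mainstream HDD": (38.00, 5.38, 85.41, 160),
    "enterprise HDD": (260.00, 27.59, 167.02, 147),
    "SSD": (260.00, 22.07, 16.13, 128),
}

for name, (purchase, replacement, power, gb) in drives.items():
    total = purchase + replacement + power
    print(f"{name}: ${total:.2f} over 5 years => ${total / gb:.2f}/GB")
```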

We see that a mainstream HDD costs an estimated $70 more to power and cool.   What these $/GB calculations show is that when you take into account the long-term costs of each type of drive, the gap between mainstream HDDs and SSDs closes significantly.  Rather than the SSD being 20 times the cost, it is now less than 3 times the cost of a mainstream HDD.  It would be even closer if we had considered some of the 10K RPM drives, but on the other hand we didn’t consider the $/GB of mainstream terabyte HDDs, which would probably come to a 5 year effective cost of around $.20/GB.


What I really wanted to glean from this cost analysis is how much of a premium I would be paying for a high performance 128GB SSD.  They roughly outperform HDDs by 10 times, require significantly less cooling (which translates to less infrastructure and space), are more reliable, and are completely silent.  However, they cost 20 to 30 times as much as a mainstream HDD, which seems a hefty premium to pay for performance.  But when we considered the reduced heat and power consumption of the drive, the SSD was less than 3 times as expensive as the mainstream HDD of similar size!  Now that factor-of-ten performance increase, quiet operation, and reduced cooling infrastructure doesn’t seem so expensive.  As SSDs get cheaper they will find their place in mainstream laptops and desktops, at least as a main OS and application drive.  As sales and storage density increase, we will hopefully see SSD prices fall fairly quickly.  However, the flash thumbdrive market had a pretty good run and may have already absorbed the initial rapid price drops we hope for, so we may only see the usual steady decline that we see with most other silicon-based products.  My one fear is that SSD densities may already be close to a ceiling: since they already rely on 32nm processes, chip developers face ever-increasing challenges as they work toward smaller semiconductor processes.  A smaller process size means more transistors fit in the same space, which generally means more GB for your money.  This is loosely tied to Moore’s law, which some speculate is reaching a slowing point; it may turn out that the growth it describes tapers off rather than continuing at its historical pace.

Getting back to the cost analysis: when compared to the enterprise HDDs, the SSDs are cheaper as well, and from spot-checking various benchmarks it seems the SSDs can outperform them on average, although I can imagine there are specific access patterns where SSDs may fall behind.  So the SSD seemingly wins on all fronts against the enterprise HDD.  Additionally, the comparison with the enterprise HDD is much fairer, since the drive we compared against is in a $/GB sweet spot for 15K RPM drives.  With the hope of improved reliability from SSDs, we would also hope for reduced costs in dealing with failed drives and the risks they pose.  Even in a RAID 5 array, a failed drive leaves the array vulnerable: an additional drive failure will compromise the array, and the potential for user error when swapping the failed drive poses additional risk.  If I were a server administrator, I would seriously give SSDs a try.  There will probably be new challenges, but it’s worth working through them; too often I see people incorrectly assess a challenge as a roadblock, and thus give up without looking for a solution.  One issue I’ve not seen addressed yet is the lack of TRIM support in RAID controllers.  Additionally, there is a lot going on in the firmware of enterprise HDDs and RAID controllers to manage error checking and error handling.  I don’t know all the details of these algorithms, but some of them seem very specialized to nuances present only in spinning disk drives.  I expect it will take some time for SSD software and firmware to mature and bring out the full potential benefits that SSDs could bring to enterprise storage.  There will likely be rumors, gossip, and debate all along the way about the best approach.
In truth the people who have spent their whole lives writing these advanced algorithms will probably go off into a corner, reassess the situation, and do a far better job than any of us could.


Yes I know it is unfair to refer to 15K RPM 2.5″ enterprise drives generically as “enterprise HDDs” since there are cheaper enterprise drives, but for the sake of simplicity I just wanted to consider the high performance drives since we are considering SSDs for their high performance characteristics.  In retrospect I wish I had done the same for mainstream HDDs by considering one of the mainstream high performance 10K RPM drives.

March 31, 2010

Working Around Databases With Lower Compatibility Levels

Filed under: Uncategorized — aaronls @ 3:21 pm

I was attempting to write a PIVOT query in SQL Server 2005, when I received the following error:

Incorrect syntax near ‘PIVOT’. You may need to set the compatibility level of the current database to a higher value to enable this feature. See help for the stored procedure sp_dbcmptlevel.

The reason for this was that the database I was querying against was set to SQL Server 2000 compatibility mode, which I verified by running this query and seeing the result of ’80’:

select compatibility_level from sys.databases where name=db_name()

To work around this issue I used the database selection drop down menu in Management Studio to select a different database that was running in 2005 compatibility mode and ran the query from that database against the original database.  To accomplish this, I had to modify the query to use a three part naming convention such as:

SELECT * FROM DatabaseNameHere.dbo.TableNameHere

The fact that new language features work against old databases in this way implies that the database from which you run the query has some hand in translating and possibly executing the query against the database with the lower compatibility level.

There are probably many hang-ups to using this workaround in many scenarios.  In my case I was just running some ad hoc queries through Management Studio to perform some data profiling for an ETL project.  If I were creating views or stored procedures, I’m not sure how you would implement this workaround.  Can you simply run the CREATE statement from the 2005 database against the 2000 database such that it creates the stored procedure in the 2000 database, or would you need to create the stored procedure in the 2005 database?   If you find you actually have to partition these stored procedures and views out into their own database, then make sure you document it well, and really weigh the cons of the complexity this adds to your project.

It is a fairly simple process to upgrade a database to a newer compatibility level; the only reason I didn’t go this route was that it would have been for only one query, and it would have introduced significant risk in terms of the testing and verification needed to confirm the change didn’t adversely affect front-end applications.

If worse comes to worst, there are probably alternative ways to write a query so that it is compatible with the 2000 database.

February 2, 2010

Retrying on exception conditionally.

Filed under: Uncategorized — Tags: — aaronls @ 3:27 pm

This is a C# utility class I created for situations where you might receive an exception, perhaps from a failed network operation, and want to retry the operation.  You could use it in a variety of ways: maybe pausing between each exception, or prompting the user to correct an issue with a “Retry?” prompt.  The Example() method below shows usage of the class (you should delete the example method before using this class in your own code).   Basically it is structured much like a try/catch block: the first delegate is the potentially error-throwing code you want to try, and the second handles the exception, deciding whether to retry or give up, signaled by returning true to retry or false to give up.  The class could be improved by changing this return value to an enum with Retry.Continue and Retry.Cancel values, or maybe even changing OnException to an event with an ExceptionEventArgs class that would allow the developer to do something like args.Cancel = true.

  public class RetryOnError
  {
    static void Example()
    {
      string endOfLineChar = Environment.NewLine;

      string result = RetryOnError.RetryUntil<string>(
        new RetryOnError.Func<string>(delegate()
        {
          //attempt some potentially error throwing operations here

          //you can access local variables declared outside the Retry block:
          return "some data after successful processing" + endOfLineChar;
        }),
        new RetryOnError.OnException(delegate(ref Exception ex, ref bool rethrow)
        {
          //respond to the error and
          //do some analysis to determine if a retry should occur,
          //perhaps prompting the user to correct a problem with a retry dialog
          bool shouldRetry = false;

          //maybe log error

          //maybe you want to wrap the Exception for some reason
          ex = new Exception("An unrecoverable failure occurred.", ex);
          rethrow = true;//rethrow the wrapped exception (this resets the stack trace)

          return shouldRetry;//stop retrying, normally done conditionally instead
        }),
        3);//give up after 3 failed attempts
    }

    /// <summary>
    /// A delegate that returns type T.
    /// </summary>
    /// <typeparam name="T">The type to be returned.</typeparam>
    public delegate T Func<T>();

    /// <summary>
    /// An exception handler that returns false if the Exception should be propagated
    /// or true if it should be ignored and the operation retried.
    /// </summary>
    /// <returns>An indicator of whether the exception should be ignored (true) or propagated (false).</returns>
    public delegate bool OnException(ref Exception ex, ref bool rethrow);

    /// <summary>
    /// Repeatedly executes retryThis until it executes successfully without
    /// an exception, maxTries is reached, or onException returns false.
    /// If retryThis is successful, then its return value is returned by RetryUntil.
    /// </summary>
    /// <typeparam name="T">The type returned by retryThis, and subsequently returned by RetryUntil.</typeparam>
    /// <param name="retryThis">The delegate to be called until success or until a break condition.</param>
    /// <param name="onException">Exception handler that can perform logging, notify the user,
    /// and indicate whether retrying should continue.  Returning true means ignore the
    /// exception and keep retrying; returning false means stop retrying and let the
    /// exception propagate.</param>
    /// <param name="maxTries">Once retryThis has been called unsuccessfully <c>maxTries</c> times, the exception is propagated.
    /// If maxTries is zero, then it will retry until success.</param>
    /// <returns>The value returned by retryThis on successful execution.</returns>
    public static T RetryUntil<T>(Func<T> retryThis, OnException onException, int maxTries)
    {
      //loop runs until either no exception occurs, or an exception is propagated (see catch block)
      int i = 0;
      while (true)
      {
        try
        {
          return retryThis();
        }
        catch (Exception ex)
        {
          bool rethrow = false;//by default use throw; to preserve the stack trace

          if ((i + 1) == maxTries)
          {//if on the last try, propagate the exception
            throw;
          }

          if (onException(ref ex, ref rethrow))
          {
            if (maxTries != 0)
            {//if not infinite retries, count this attempt
              i++;
            }
            continue;//ignore the exception and retry
          }

          if (rethrow)
          {
            throw ex;//propagate the (possibly wrapped) exception; resets the stack trace
          }

          throw;//else propagate and preserve the original stack trace
        }
      }
    }

    /// <summary>
    /// Repeatedly executes retryThis until it executes successfully without
    /// an exception, or onException returns false.
    /// If retryThis is successful, then its return value is returned by RetryUntil.
    /// This overload retries indefinitely until success or until onException returns false.
    /// </summary>
    /// <typeparam name="T">The type returned by retryThis, and subsequently returned by RetryUntil.</typeparam>
    /// <param name="retryThis">The delegate to be called until success or until a break condition.</param>
    /// <param name="onException">Exception handler that can perform logging, notify the user,
    /// and indicate whether retrying should continue.  Returning true means ignore the
    /// exception and keep retrying; returning false means stop retrying and let the
    /// exception propagate.</param>
    /// <returns>The value returned by retryThis on successful execution.</returns>
    public static T RetryUntil<T>(Func<T> retryThis, OnException onException)
    {
      return RetryUntil<T>(retryThis, onException, 0);
    }
  }


February 1, 2010

Are self-signed certificates with https less secure than http alone?

Filed under: Uncategorized — aaronls @ 8:27 am

I was hoping to find a way to use a self-signed certificate over https to prevent eavesdropping.  I do understand that this is vulnerable to man-in-the-middle attacks.  Unfortunately, browsers seem to view this as less secure than http, because they display many alarming warnings that a user must click through that otherwise are not present when accessing an http website.  This sends a clear message to the user that a site with a self-signed certificate is significantly less secure than an http website.  I would argue these warnings are unwarranted, and at a minimum, a self-signed certificate site over https should appear no less secure than a regular http website.

I suppose if you are on a network where someone has managed to spoof a major bank’s website and use a self-signed certificate, you’d want lots of red flags to go up to alert users.  However, if attackers have been able to compromise the DNS, or have one of their own on the network that traffic is using, allowing them to redirect requests for the bank’s website, then they are just as capable of leaving the user on an http connection (because most users don’t type the https in the URL and rely on a redirection from the website), logging the traffic, and performing the SSL handshake with the destination website.   Thus the user will get no warnings at all (other than the lack of subtle visual cues, but these are nothing compared to big red warning boxes).  So I feel the warnings present for self-signed certificates really only add a few relatively simple steps for the attacker, compared to how far they’ve already come to perpetrate the attack.  Therefore, I would say that the warnings presented for websites using self-signed certificates do not significantly reduce the scenarios in which a connection can be compromised.

As an example, let’s say I would like to set up a website that is a casual online game.  I will be collecting non-sensitive data, but I would like to at least eliminate scenarios where a malicious party could eavesdrop on the connection and use that to steal the user’s account for the game.  With a self-signed certificate and SSL, I can at least provide protection against eavesdropping scenarios.

So, in conclusion, I would have hoped that a website that uses a self-signed certificate would raise no more red flags than a regular http website.  Let me make clear that I don’t expect them to get the “this website is secure” seal of approval and visual cues that you get for secure websites, but I strongly believe the alarming warnings are very inappropriate.

However, johnath feels the warnings are appropriate because there are point-and-click programs that can log encrypted traffic by spoofing self-signed certificates. I don’t see how this makes such a scenario warrant warnings when the same logging can occur over http.  If one applied the same logic to http connections, you would display the same alarming warnings there too.  Maybe you believe that your average user knows http: means the site is alarmingly insecure, and therefore alarming warnings are unnecessary.  As johnath points out, though, you can get free certificates from that will not give you warnings in Firefox 3 because the root certificate is included.  I’m skeptical as to what the catch might be with these free certificates, if there is a catch.

January 10, 2010

“I was unable to reproduce your issue” is no excuse! Log, you fool! LOG!

Filed under: Uncategorized — aaronls @ 1:10 pm

Many times I’ve gone to Microsoft’s Connect feedback site to report bugs in Visual Studio or SQL Server, only to find that the issue was already reported and an MS rep had closed it due to inability to reproduce it. This is absurd! The fact that there are no detailed application logs giving MS everything they need to determine the cause of an issue is inexcusable. They miss countless opportunities to improve their products, and it can be very discouraging for users who went through the effort of providing detailed bug reports.

When deploying solutions to a large number of users I found it to be critical that I have detailed logging for applications and installers. I worked out a system of logging for our .NET applications using log4net which allows for both rolling log files and limiting the space the log files occupy. It can also be configured at runtime to turn off or on logging, so that performance critical applications can run without the logging. For our installers we used the special logging build of NSIS along with some additional functions to support logging.
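log4net is the .NET-specific piece here, but the rolling-file-with-a-size-cap pattern exists in most logging frameworks. As a neutral illustration (the logger name, file name, and size limits are arbitrary choices for the sketch, not anything from our actual setup), here is the same idea using Python’s standard library:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# Roll over at ~1 MB and keep at most 5 old files,
# so the logs can never occupy much more than ~6 MB of disk.
handler = RotatingFileHandler("myapp.log", maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)

logger.info("application started")

# For performance-critical runs, logging can be effectively
# disabled at runtime by raising the level:
# logger.setLevel(logging.CRITICAL + 1)
```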

There were many occasions where we were able to determine the cause of intermittent or hard-to-reproduce issues by looking at these logs. In situations where the developer/tester does not have access to the user’s machine, being able to determine almost everything you need to know from a log provided by the user is invaluable. It also clears up the sometimes vague description of the problem and steps provided by the user: you can see exactly what the user was doing leading up to the issue.  There were even times when, despite the logs, we could not reproduce an issue because of the user’s unique setup, but we were able to determine why it was occurring, fix it, and send the fix to the user for them to verify that it did indeed resolve the issue.  How awesome is that?

If you ever hear yourself telling a user “I was unable to reproduce your issue”, then your product is to blame. This is not roulette, these are fully deterministic computers (the random numbers are not even really random). So don’t act like it is just bad luck or misfortune on the user’s part. It is your application’s design and lack of adequate feedback and logging that have failed. Logging is one of the easiest things to implement. So stop making excuses and start logging.

October 29, 2009

No Windows 7 driver for your printer? Use XP Mode.

Filed under: Uncategorized — aaronls @ 4:35 pm

I needed to print a coupon today, and found that there are no Windows 7 64bit drivers available for my Lexmark Z605 printer, despite the fact that Microsoft’s compatibility site claimed that there was.

So I wondered if I could print from a Virtual PC, and found this guide, but I knew this would only work if the printer was connected to a parallel port.  So try Method 2 if your printer is on a parallel port.

Method 1 of the guide showing how to set it up as a network printer would not work because I would need a driver loaded on the host OS(in my case Windows 7 64bit).

This is where XP Mode comes to the rescue, because unlike older versions of Virtual PC, it does a great job of sharing USB devices with the guest OS running under Virtual PC.  If you don’t know what XP Mode is, it is essentially a virtual machine that simulates a second computer running Windows XP.  Follow steps 3 and 4 here to download and install XP Mode, and then install the Windows Update that adds Virtual PC for Windows 7 (note that this Virtual PC is different from Virtual PC 2007, in that it has tighter integration with Windows 7).

After these two items are installed and you have rebooted, you can run XP Mode from Start Menu -> All Programs -> Windows Virtual PC -> Windows XP Mode.  After you go through the steps to load XP Mode and you are at the Windows XP desktop, select your printer from the USB menu at the top.  What this does, sometimes taking a few minutes, is transfer control of the USB printer from the Windows 7 host to the Windows XP guest in Virtual PC.  You should get a little popup from the systray in Windows XP indicating a device has been attached once control finishes transferring. Now you can download and install the Windows XP printer driver and set up your printer just as you would have done in Windows XP (32bit), which can be a daunting process in itself.  Given that Windows XP has been around for almost a decade, I will leave you to the rest of the internet for detailed info on that.

So this provides a workaround for those who, like me, chose to dive into the 64bit version of Windows 7 and found that their printer is lacking 64bit drivers, or lacking Windows 7 compatible drivers entirely.

October 25, 2009

No audio in Windows 7 due to incorrect default settings.

Filed under: Uncategorized — aaronls @ 8:55 am

I have an older motherboard with a RealTek ALC882 chipset providing audio support.  I installed the “High Definition Audio Codecs (Software)” driver package from RealTek’s website, then rebooted, but still had no audio.  Clicking on the speaker icon in the systray and selecting Mixer brings up the volume controls, which showed the decibel meters jumping as I played sound, but I could hear nothing.  My sound card/chipset, like many others, supports a technology that allows the driver to detect when speakers, headphones, or input devices have been plugged into jacks on the computer.  The problem was that the speakers that were plugged in were being defaulted to rear speakers instead of front speakers.

There are two things that one should try to resolve this issue.

First, right-click the speaker icon in the systray and select Playback devices.  Your computer may have several output devices listed here: some jacks supporting analog audio (most common; listed as Speakers), perhaps additional S/PDIF jacks supporting digital audio (in my case listed as Realtek Digital Output), and sometimes an HDMI output if your graphics card supports an HDMI connection, since HDMI can carry an audio signal on the same cable (listed as ATI HDMI Output).  In my case I am only concerned with the Speakers playback device.  I made sure there was a green check mark on the icon for the Speakers, indicating they are the default playback device (if not, right-click and select Set as Default Device).  I then left-clicked the Speakers to select them and clicked the Configure button at the bottom left.  This takes you through a wizard for configuring the output.  Try to select the settings that your speakers support, and if this doesn’t correct your audio, click the Configure button again and try different settings in the wizard.  In my case I have a left and right speaker and a subwoofer, so I selected 5.1 Surround, then on the next screen deselected Center and Rear pair since I do not have those speakers.

Should this not work, you can also unplug the speakers and plug them back in, to see if your audio driver detects them being plugged in and asks you what kind of speakers they are.  In my case this was the source of the problem, as the RealTek driver was detecting them as a Rear pair instead of a Front pair.  When it prompted me, I simply selected Front and this solved the problem immediately.

Troubleshooting Windows 7 desktop resolutions.

Filed under: Uncategorized — aaronls @ 8:22 am

After upgrading to Windows 7, users may experience problems where Windows 7 tries to use a resolution or refresh rate that is not supported by their monitor, which in most cases will just present a blank screen with a monitor message indicating something along the lines of “Mode not supported”.  For example, when connecting a Samsung 50″ HL50A650 DLP television via a VGA connection, only a few resolutions are supported, and most of those support only a 60Hz refresh rate.  Windows 7 may incorrectly detect the monitor and default to a resolution or refresh rate that is not supported (users have reported an incorrect default of 59Hz to be a problem).  Since the user has only a blank screen, it is impossible to change settings to troubleshoot the issue.

1) First, the user needs to either boot into a VGA Safe Mode (via the Enable VGA Mode option accessed by pressing F8 during bootup) and hope the default resolution is supported by the TV/monitor, or temporarily connect a monitor that Windows detects correctly.

2) Now the user should download and install the latest drivers for their graphics card.  Then connect the problem TV/monitor, turn it on, and reboot the computer.  If the TV/monitor is not connected and turned on before booting up the computer, Windows may not be able to detect it correctly.

3) If step #2 did not solve the issue, the next step is to disable the Extended Display Identification Data (EDID), which should prevent the graphics driver from trying to guess the monitor’s capabilities, since it is clearly doing a poor job of guessing.  The following is how to do so using the Catalyst Control Center, for those running that driver suite with an ATI graphics card.  If anyone with this issue on an NVIDIA card is able to provide the equivalent steps, please do so in the comments.

For ATI Catalyst Control Center:

3.a) Right click an empty area on the desktop, and choose Catalyst Control Center.
3.b) In the top left Graphics menu, choose Desktops & Displays.
3.c) In the bottom left where a monitor icon is shown, click the black triangle and in the menu that is displayed click Configure…
3.d) Clear the checkbox for Use Extended Display Identification Data (EDID) or driver defaults.  This will prevent Windows from attempting to change resolution and refresh rate settings when it incorrectly detects the monitor during bootup.
3.e) Now set the Maximum resolution setting to what the monitor’s specifications indicate is its maximum, and also set the maximum refresh rate.  In the case of the Samsung 50″ HL50A650 DLP, set the Maximum refresh rate to 60Hz and Maximum resolution to 1920×1080.  Click Apply.
3.f) From the Graphics menu, select Desktop Properties.

3.g) Set the Desktop area and Refresh rate to a resolution and refresh rate known to be supported by the TV/monitor.  The Force… menu can be used to force settings in the case that the ATI driver doesn’t think your monitor supports those settings.  Be absolutely sure that the TV/monitor supports these settings, as there is a chance that it could be damaged by using incorrect settings.
3.h) Click Apply to save the settings, reconnect the TV/monitor you intend to use if you had swapped it out for troubleshooting, and reboot.
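For the curious, the EDID being disabled above is just a 128-byte block the monitor reports over the display cable, containing a manufacturer ID, supported timings, and a checksum. Below is a minimal sketch in Python of how the start of that block is laid out, per the VESA EDID base-block format; the sample bytes are synthetic, built here purely for illustration (a real EDID would also carry timing descriptors, which this sketch ignores).

```python
# Minimal EDID sketch: build a synthetic 128-byte base block and decode
# its manufacturer ID.  A real EDID also carries supported timings,
# which the driver uses to guess capabilities (poorly, in this case).

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def make_sample_edid():
    """Build a synthetic 128-byte EDID with manufacturer ID 'SAM'."""
    edid = bytearray(128)
    edid[0:8] = EDID_HEADER
    # Manufacturer ID: three letters packed as 5-bit values ('A' = 1),
    # big-endian, in bytes 8-9.  'SAM' packs to 0x4C2D.
    s, a, m = (ord(c) - ord('A') + 1 for c in "SAM")
    packed = (s << 10) | (a << 5) | m
    edid[8] = packed >> 8
    edid[9] = packed & 0xFF
    # Byte 127 is a checksum: all 128 bytes must sum to 0 mod 256.
    edid[127] = (256 - sum(edid[:127])) % 256
    return bytes(edid)

def parse_manufacturer(edid):
    """Return the three-letter manufacturer ID, validating header/checksum."""
    assert edid[0:8] == EDID_HEADER, "bad EDID header"
    assert sum(edid) % 256 == 0, "bad EDID checksum"
    packed = (edid[8] << 8) | edid[9]
    letters = [(packed >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(v + ord('A') - 1) for v in letters)

print(parse_manufacturer(make_sample_edid()))  # -> SAM
```

When the checkbox in step 3.d is cleared, the driver stops trusting this block and instead honors the maximums you enter manually in step 3.e.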

September 5, 2009

Primary Core Idea for Multicore Systems

Filed under: Uncategorized — aaronls @ 9:59 am

Due to the challenges of implementing truly multicore applications, many applications run better on systems with higher-speed single core processors than on the typically slower-clocked multicore systems. Multicore processors generally have greater value in their computing potential, but that potential is only realized if all of the cores are utilized by a multithreaded application. Even when software developers go through the effort of spawning threads for some background work, this is sometimes intermittent, and there is often still one thread that does the majority of the processing. For Windows applications, you often have a GUI thread that most of the work occurs on, and due to certain restrictions on how components in the GUI are accessed, you are forced to marshal background work back to the GUI thread.
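That marshalling pattern can be sketched without any real GUI toolkit: the essence is a queue of callbacks that only the "GUI thread" drains. Here is a minimal Python illustration of the idea (the names like post_to_gui and the fake gui_state dict are invented for the sketch; in WinForms this role is played by Control.Invoke).

```python
import queue
import threading

# Sketch of GUI-thread marshalling: background threads may not touch
# "GUI state" directly, so they post callbacks to a queue that only the
# GUI thread drains.
gui_queue = queue.Queue()
gui_state = {"label": ""}          # pretend this is a GUI control

def post_to_gui(callback):
    """Callable from any thread: schedule callback on the GUI thread."""
    gui_queue.put(callback)

def background_work():
    result = sum(range(1000))      # slow work happens off the GUI thread
    # Only the small *update* is marshalled back to the GUI thread:
    post_to_gui(lambda: gui_state.update(label=f"done: {result}"))

def gui_loop(max_events=1):
    """Stand-in for the GUI message pump: run queued callbacks in order."""
    for _ in range(max_events):
        gui_queue.get()()          # block until an event arrives, then run it

worker = threading.Thread(target=background_work)
worker.start()
gui_loop()                         # "GUI thread" processes the posted update
worker.join()
print(gui_state["label"])          # -> done: 499500
```

The point of the sketch is the bottleneck it exposes: no matter how many cores you have, everything that flows through that single queue is serialized onto one thread.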

Not being a computer engineer, I’m not sure if this is possible or feasible, but if there were a hybrid processor or system that included a single high-speed core and several lower-speed cores, then the speed issues with single-threaded applications would be mitigated while still giving you the value of a multicore system.

Perhaps a dual processor system could allow a high-speed single core processor in one slot and a multicore processor in the other slot. Single-threaded applications would still run at a reasonable speed, and multicore applications would be able to take full advantage of the value in multicore processors.
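To put rough numbers on the idea, here is a back-of-the-envelope model in Python. The clock speeds, core counts, and work sizes are all invented for illustration, and it assumes perfect parallel scaling, which real workloads never achieve: the point is only that the serial task finishes fastest on the one fast core, while a parallel task still gets multicore throughput from the pool of slow cores.

```python
# Toy model of the hybrid idea: one fast core plus several slow cores.
# All numbers are invented for illustration.
FAST_GHZ = 3.0          # the single high-speed core
SLOW_GHZ = 2.0          # each of the extra cores
SLOW_CORES = 4

def serial_time(work_gcycles, ghz):
    """Seconds for a single-threaded task of `work_gcycles` on one core."""
    return work_gcycles / ghz

def parallel_time(work_gcycles, ghz, cores):
    """Ideal seconds for a perfectly parallel task across `cores`."""
    return work_gcycles / (ghz * cores)

WORK = 12.0  # billions of cycles of work

# Single-threaded app: the fast core wins.
t_fast = serial_time(WORK, FAST_GHZ)               # 4.0 s
t_slow = serial_time(WORK, SLOW_GHZ)               # 6.0 s

# Perfectly parallel app: the pool of slow cores wins.
t_pool = parallel_time(WORK, SLOW_GHZ, SLOW_CORES)  # 1.5 s
```

Under these made-up numbers, a hybrid system gets the 4.0 s serial time and the 1.5 s parallel time, rather than being stuck with the worse figure in each column.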
