LabMice.net - The Windows 2000\XP\.NET Resource Index

Last Updated December 10, 2003

Daily Briefing - Archive July 2003

Welcome to our Blog! We've decided to start this web log as a way to communicate new changes to our site, discuss various happenings, and share occasional rants about a variety of topics (mostly tech related). We hope to keep it fun, interesting, and brief. And as always, we don't intend to follow any of the traditional blog rules. If you'd like to send us feedback about the site or comments posted in the Blog, just drop me a line at bernie@labmice.net
Thursday, July 31
Every now and then, book publishers preview their current titles by offering a sample chapter on the web. By contract, these chapters can only be displayed for a limited time, so when they appear you need to read them immediately, or download them if you can. SearchWin2000 offers a "Chapter of the Week" feature which highlights samples from several current titles and is an excellent resource for any administrator. Starting next week we will be including these sample chapters as our link of the day every Wednesday.


Friday, July 25
More server outages this morning. This one appears to be a combination of network issues (hardware failure on a router) combined with some server side problems. I'm doing everything I can to resolve these issues with Interland, thanks for being patient.


Wednesday, July 23
It's been a busy month for Microsoft patches and vulnerability alerts. Typically, Microsoft releases one alert per week more or less like clockwork. This month there have been 8 new vulnerability alerts (3 of which are critical) in addition to a cumulative patch for SQL Server. That should keep your test lab busy for a while! I've noticed that Microsoft seems to be doing a better job of addressing these vulnerabilities and providing more detailed information than they did a few years ago. While constantly rolling out patches is a real pain, Windows Update, Software Update Services, and Group Policy are making the job much easier than it was in NT 4.0. However, looking over the list of vulnerabilities, I can't help but notice how many are related to features that have been included in the operating system but have little use in a corporate environment. DirectX, Windows Media Player, Windows Messenger, and other dubious services all seem to generate a disproportionate number of vulnerabilities. If your business doesn't need any of these services on its workstations, removing them from your default installation images may save your staff a lot of time rolling out new patches.


Tuesday, July 22
This evening CNET is reporting that security researchers in Switzerland have figured out a way to crack Windows passwords in seconds using lookup tables and a PC with 1.5GB of RAM. Hacking passwords on Windows 9x/Me has never been a big deal, and L0phtcrack has been used to break NT passwords for years. The NT hash used in Windows 2000, Windows XP, and Windows 2003 is stronger than previous versions, but it can be broken using this method as well, keeping in mind that this "hack" requires physical access to the machine. In a domain-based network environment, only local machine accounts are stored on workstations and application servers. All of the network accounts are stored on the domain controllers, which should be locked away in your server room. If your domain user accounts have passwords with fewer than 15 characters,
Windows generates both a LAN Manager hash (LM hash) and a Windows NT hash (NT hash) of the password for backwards compatibility with NT and Win9x. These hashes are stored in the local Security Accounts Manager (SAM) database or in Active Directory. If you don't have any legacy clients on your network, you can prevent Windows from storing passwords in the weaker LANMan format by following the steps in KB Article 299656. What makes this attack noteworthy is that it doesn't rely on a dictionary attack, the machine was a standard PC using reasonably affordable hardware, and the results took seconds - even for complex alphanumeric passwords. The article does make a valid point that even the more current NT hash could be a little more secure if it were seeded with random bits (which still wouldn't stop this attack, just slow it down). Even with the better encryption, it would just be a matter of time before a new vulnerability was found. It will be interesting to see Microsoft's response to this...
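To see why lookup tables chew through LM hashes so quickly, here's an illustrative Python sketch (not the researchers' actual code - the password and character counts are just for demonstration) of the preprocessing the LM scheme applies and the keyspace reduction it causes:

```python
# Illustrative sketch: why LM hashes fall to precomputed tables.
# The LM scheme uppercases the password and splits it into two independent
# 7-character halves, so an attacker only ever searches 7-character chunks
# of a case-folded alphabet instead of one long case-sensitive password.

def lm_halves(password: str) -> tuple[str, str]:
    """Mimic LM preprocessing: uppercase, pad to 14 chars, split in two."""
    p = password.upper().ljust(14, "\0")[:14]
    return p[:7], p[7:]

# Rough keyspace comparison: ~69 printable characters remain after
# case-folding (95 printable ASCII minus 26 lowercase letters).
lm_space = 2 * 69 ** 7    # two independent 7-character searches
nt_space = 95 ** 14       # one 14-character case-sensitive search

print(lm_halves("SuperSecret123"))   # ('SUPERSE', 'CRET123')
print(f"LM search space is roughly {nt_space // lm_space:,} times smaller")
```

The halves are hashed independently, which is also why a table precomputed for 7-character chunks can attack any password up to 14 characters.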


Monday, July 21
Unbelievably, Interland issued a full refund of our hosting fees for the entire month! Things weren't looking so good this morning when Interland's abuse department responded to our requests to clarify what non-existent script led to their shutdown of our site. Their reply was "it's not our job to troubleshoot your script." Nice. To Interland's credit, a support technician named Roby sent me an e-mail this morning stating he didn't like the reply either and is contacting the abuse department internally to try and resolve the issue. I also finally got a manager on the phone this morning (I called them, as my requests to have someone call me have been ignored for a week), and I'm hopeful I can get this issue resolved today. Am I beating a dead horse? Maybe. My concern is that if some other source was causing the high CPU utilization, and they continue shutting down the site and claiming it was abuse without even attempting to find the root cause, I'll have even more outages. And I'd like to spend more time adding content to my site than chasing after my web host and wondering if any sudden outage or performance slowdown is being intentionally caused by their staff. BTW - The manager at Interland I spoke to this morning never did call me back as promised. (Not that I was holding my breath.)


Saturday, July 19
I normally don't blog on Saturday, but there have been some interesting developments in my battles with Interland (our web host), and I thought I'd update the saga for the benefit of anyone else having similar problems with them. First off, I finally found their uptime guarantee buried in the web site. In short it states that
Interland guarantees that the BlueHalo hosting architecture will be Available 99.99% of the time each month (no more than 4 minutes, 23 seconds of downtime per month), and any affected Customer may obtain a credit equal to 10% of the Customer's monthly hosting fee for every 30 minutes of downtime (up to a maximum credit of 100% of the monthly hosting fee). Since I was down for 16 hours, we should be receiving a full refund. Except for the pesky abuse department's insistence that an alleged script we're running on the site caused the outage which resulted in their shutdown of our site. To date, they have not been able to tell us what script caused the spike in CPU utilization (we aren't running any, but they aren't changing their tune), there is nothing in the web logs to indicate any abnormal activity, and the Code Manager logs (which monitor scripts for errors) are empty. Looking through the trouble tickets via their online interface I discovered an entry by a technician who states he shut down our site at 7:30pm Monday and referred the issue to abuse (who didn't notify us of the shutdown until 2pm Tuesday). We were on the phone with tech support repeatedly during this period (Monday evening - Tuesday morning and afternoon) and were never informed of the issue. They just shut us down with no attempt to contact me, discover the root cause of the problem, or even look over the site to verify that it is running server side scripts. I did discover that another site on the same server was shut down for the same reason (related?), that the server was the subject of a Denial of Service attack (Tuesday-Wednesday), and that there was a hardware failure on both the Windows and Linux BlueHalo servers later in the week. I've started the process of applying for a refund/credit for our hosting fees, but I'm unsure of how many hoops I'll need to jump through.
According to their billing page, I need to cut and paste copies of my log files into the web form (they're 60MB each, I don't think that will work), include evidence of tracert scans, and list the exact errors I received on the site. This seems odd, because the server support team already knows how much downtime their servers have had. It feels to me like a tactic to discourage customers from filing for refunds, or a cheap excuse to reject claims. (No tracert logs? No soup for you!) I'm already expecting them to reject the claim wholesale because of the bogus abuse allegations, which are still unresolved. But I'm not giving up yet! Stay tuned for our next exciting episode, when we'll hear Underdog exclaim "This is business class hosting?"
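For what it's worth, the credit arithmetic in the SLA quoted above does work out to a full refund for an outage this long. A quick sketch (the figures come straight from the quoted terms and the 16-hour downtime mentioned in the post):

```python
# Sketch of the credit math from the SLA terms quoted above:
# a 10% credit on the monthly fee per 30 minutes of downtime, capped at 100%.

def sla_credit_pct(downtime_minutes: float) -> float:
    """Credit as a percentage of the monthly hosting fee."""
    full_blocks = downtime_minutes // 30      # completed 30-minute blocks
    return min(full_blocks * 10.0, 100.0)

print(sla_credit_pct(16 * 60))    # 16 hours of downtime -> 100.0 (full refund)

# The 99.99% guarantee itself allows ~4.4 minutes of downtime per month
# (0.01% of the ~43,830 minutes in an average month), which matches the
# "4 minutes, 23 seconds" figure in the quoted terms.
print(round(43830 * 0.0001 * 60))  # -> 263 seconds, i.e. 4 min 23 sec
```

So the cap kicks in after just five hours of downtime; everything beyond that is on the house, so to speak.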


Friday, July 18
According to the National Bureau of Economic Research, the U.S. recession ended back in November and the economy has been growing steadily for the past 7 months with no signs of inflation. So why doesn't it feel like it? Because employment hasn't risen with the growth. Apparently companies have made so many gains in productivity and efficiency (presumably due to technology investments) that there has been no need to re-hire the people they laid off last year. Critics argue that continued unemployment and public fears about economic futures will cause a decrease in spending and throw us into another recession. Others argue that these efficiencies help companies compete and can lead to stronger growth and more stable jobs in the future. So what does this mean for tech? The possibility of more spending. Many of the technology investments made in 1999-2000 are approaching the end of their lifecycles (typically 3-5 years) and need to be replaced. The launch of Windows XP and Windows Server 2003 may spur many companies still running on Windows NT 4.0 to finally migrate, and the upcoming advance of 64-bit systems could lead to server upgrades, increased software development, and more IT projects. Companies looking to maintain or improve the efficiencies they've gained through technology will need to reinvest in those systems. This means fewer layoffs in corporate IT departments, and a potential increase in contract work for both developers and administrators. While this is small comfort to anyone already unemployed, those with jobs in technology should start thinking about future projects that could benefit their company. Those without jobs should make the most of the downtime and upgrade their certifications and technical skills.


Thursday, July 17
Received an e-mail today from ranking.com claiming LabMice.net is now the 79,315th most visited site on the web. I still have a way to go to catch Windows & .NET Magazine (ranked 33,383) and Microsoft.com (ranked 11th), but I'll keep working on it! Either way, I think it's pretty cool we're in the top 100,000 and still growing. That is, if Interland can ever resolve the technical issues with its servers. I'm sure you're all tired of hearing me rant about the lack of support and service over the last few days. Oddly enough, when I first migrated to Interland they offered a 99.99% uptime guarantee. Looking around their site today, I've noticed that the guarantee has vanished, as well as any copies of their service level agreements. I found an online rating forum for web hosts that featured several pages of customer complaints about Interland's poor service and other nightmares. Hmmm, I wonder why their stock is 87 cents a share? It's bad enough that 20,000 people couldn't connect to my site on Tuesday; I can't imagine being an online store and losing thousands of dollars in sales due to this fiasco. Maybe we will just have to break down and move to a dedicated server earlier than planned.


Wednesday, July 16
Sorry for the lack of updates today, but I've spent the entire morning on the phone with Interland's tech support trying to get the server stability issues resolved. The site has been up and down all day, and nobody seems to be able to provide a reason why, just promises that they're working on it. Yesterday Interland's abuse department sent us an email claiming we were causing the outages because of excessive CPU utilization, but they have not responded to our inquiries for specifics. Requests to speak to a manager have been met with games of phone tag, erroneous transfers to the billing department, and general apathy. Thankfully I'm using easymonitor.com's free service, which sends me an e-mail every time the site goes down and allows me to get tech support on the horn immediately and get the server back up. I'm sure many of you are wondering why I'm still with Interland. In a nutshell, changing web hosts (while not technically very complicated) doesn't really guarantee better service. In fact it could get worse. Every web host promises the moon, but a simple Usenet search usually uncovers a host of horror stories about the lack of tech support or customer service. A number of my colleagues have moved their sites from host to host, only to find that the issues were merely different. They still had lots of downtime and support hassles. Is anyone ever really happy with their web host? This site is very large and generates over 1 million page views a month, which consumes over 40GB of bandwidth. I wouldn't trust it to just anyone. A dedicated managed server would be nice, but they usually run $300+ a month. I am seriously considering ServerBeach, which offers unmanaged dedicated servers for about $120 a month, but that would mean more time managing the server and less time generating content.
If anyone has any first hand experience with a reliable web host that can handle the volume our site generates and can host us for under $100 a month, please drop me a line at bernie@labmice.net
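The monitoring service mentioned above isn't doing anything magical, by the way. A minimal sketch of the idea (this is illustrative Python, not the service's actual code; the URL is just an example):

```python
# A minimal sketch of what an uptime monitor does: fetch the page on a
# schedule and flag any failure (timeout, DNS error, or HTTP error status).

import urllib.error
import urllib.request

def status_ok(code: int) -> bool:
    """Treat any 2xx/3xx HTTP status as 'site is up'."""
    return 200 <= code < 400

def check_site(url: str, timeout: float = 10.0) -> bool:
    """Return True if the site answers successfully within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return status_ok(resp.status)
    except (urllib.error.URLError, OSError):
        return False

# Example usage: run this from a scheduler every few minutes and fire off
# an e-mail (instead of a print) whenever check_site() returns False:
#   if not check_site("http://www.labmice.net/"):
#       print("site is DOWN - time to get tech support on the horn")
```

A real monitor would also de-duplicate alerts (so one outage doesn't generate fifty e-mails) and probe from more than one network, since a failure might be on the monitor's side.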


Tuesday, July 15
As many of you are already aware, we have experienced several web server outages over the last 24 hours. We migrated to new servers at the beginning of the month to get away from a poorly secured Windows 2000 Server. The new server is a clustered array running Windows Server 2003 and IIS 6.0, and in theory should provide greater reliability. Our web host Interland has been very slow to respond to support requests, and our site was down most of last night (5:30pm - 2am) and again this morning from 7:51am until 2pm. To add insult to injury, Interland sent us an e-mail at 2:11pm claiming "
We have received a report that your site was using excessive amount of cpu causing server wide resource issues. We have suspended your site to maintain server performance. We will require a response to this notice by 2:00pm EST on Wednesday July 16, 2003 as to what your plan of action is to correct this issue." Since we don't run any server side scripts, and our web traffic is well within the support parameters, this seems ridiculous. It is possible that we have become the target of a denial of service attack, but so far we have no evidence to support this. Stay tuned for further developments.


Friday, July 11
No sooner had I finished yesterday's Blog, when the power supply on my HP Pavilion 760n started humming loudly and making weird noises. As luck would have it, I'm about a month out of the warranty. I'll buy a new case and power supply and move the internals over this weekend, but in the future I think I'll stick with Dell.


Thursday, July 10
PC Magazine released their 16th annual reader survey for hardware reliability and customer satisfaction. I wasn't surprised to see Dell at the top of the list for desktops and eMachines at the bottom. Dell also scored very well in server reliability, but I was disappointed to see that HP and Compaq scored so poorly in both desktops and servers. I have used all 3 of these companies' servers in enterprise environments for years, and outside of seeing an abnormally high failure rate on Compaq power supplies, and the occasional controller card failure, I've never had any major issues. Unfortunately, Dell also scored poorly on laptop reliability, with the Latitude family actually faring worse than the consumer oriented Inspiron line. (IBM, Toshiba, and Sony scored highest.) Outside of hardware failures, there were also growing complaints about Dell, Gateway, and HP's use of overseas call centers where personnel speak broken or heavily accented English. This trend seems penny wise and pound foolish, and I hope businesses will come to their senses soon.
(Canadians need jobs too!) On the plus side, a factor that significantly contributed to customer satisfaction was the reliability and ease of use of Windows XP, as well as advances in hardware. 37% of the readers surveyed said their systems never crash (compared to 7% for Win98), and I suspect that the overall reliability numbers are even higher. I think things are moving in the right direction. These surveys echo the sales figures for the major manufacturers, and if they want to survive they'll have to do a better job on all fronts. HP, Compaq, and Gateway need to improve their reliability or they'll continue to lose market share to Dell. Dell needs to improve customer service, or it will lose to IBM. And Microsoft needs to keep improving its operating systems (instead of adding new features) or it will continue to lose ground to Linux and Apple.


Wednesday, July 9
McDonald's announced yesterday that it was testing wireless Internet access at some of their restaurants in the San Francisco area, apparently following the lead of Starbucks and Borders, which have already rolled out WiFi at a number of locations. Personally, I love being connected from anywhere and believe that getting more restaurants, coffee houses, hotels, and other public hangouts enabled as wireless hotspots is a step in the right direction. What I think is a big mistake is charging for it. In McDonald's case, it's $4.95 for 2 hours, and with Starbucks and Borders it's $6 a day or $30 per month. I'm all for ensuring that actual customers are using the wireless network and not just encouraging freeloaders to take up tables and bandwidth. But WiFi should be an incentive to attract more customers and encourage them to stick around and buy more of your products, not an "add on premium service". It's obvious that wireless access would increase same-store sales dramatically, so do you really have to charge the customer again? Overcharge, actually. Setting up wireless access in a store or cafe is incredibly cheap. McDonald's spends more on napkins or ketchup at a location in a month than they would providing internet access; are they going to start charging for those items as well? I normally don't eat at McDonald's, but if I were on the road and knew I could access my email quickly at any of their locations, I would stop in and order food while I was connecting. I don't drink coffee either, but I would happily drop into a Starbucks and order tea for $2.50 if I could connect. I might even stick around and order a $5 sandwich. But add another $5 premium for the "privilege" of connecting to the web, and I'd rather wardrive for a few minutes and find an open hotspot. In another classic example of corporate "we don't get it" idiocy, McDonald's management is already talking about using the wireless access as a "channel" to push content.
Great, I'm paying $4.95 for pop up ads when I'm already sitting in the restaurant. They must have hired the marketing morons from AOL. Here's an idea: Give out a daily password to each customer with their order and stop trying to bleed your customers at every turn. They notice.


Tuesday, July 8
In ancient mythology, Cassandra was a woman who had the gift of being able to predict the future, but lacked the power of persuasion. As a result, no one believed any of her predictions, including the Trojans, who ignored her warnings as they wheeled a giant wooden horse inside their fortress. Over the years I've often felt a bit like Cassandra, and I'm not the only one. The former head of counter-terrorism for the FBI repeatedly warned his superiors about Al Qaeda and Osama Bin Laden years before the attacks on Sept 11. (He was ignored, and later fired for stepping on a few toes.) NASA engineers warned management about the danger of the O-rings months before the Challenger disaster, and pleaded with management for better video footage of Columbia, but were ignored again. Although few network management issues risk lives, they affect the health of your company in both the long and short term. So how do you convince management that threats to your network are real without losing your job? Try being persistent without being annoying. If your first attempt at informing management fell on deaf ears, regroup and try to find several independent sources (articles, books, online references, etc.) that back you up. Highlight relevant sections, and create a summary sheet that highlights your points and references specific sections of your research material. Wait a week or two before re-presenting the material to your boss, and only give him or her the completed summary - not a piece by piece argument. Resist the urge to go over his/her head, and try to stay objective during the process. The more emotionally involved you become in the argument, the greater the chance that you'll be ignored, become cynical about the process, and be labeled a "problem employee." Your goal is to make a positive change in your environment, not to turn the process into a test of wills between you and management. If management still ignores you, try spreading the word among your peers.
If an entire group of administrators feels that the network is at risk, there is a greater chance that your warnings will reach the right ears and that your changes will be implemented. If your predictions are still ignored and the problem eventually blows up in management's face, never say "I told you so." You'll destroy any credibility you've gained and you'll be ignored the next time as well.


Monday, July 7
While most of us spent the holiday weekend grilling outdoors, enjoying fireworks, and catching a movie (T3 was great!), a few people chose to take part in a hacker vandalism contest that awarded points for defacing websites: one point for Windows systems, 2 points for Unix, Linux, or BSD, 3 points for IBM's AIX, and 5 points for HP-UX or Apple OS X. Although the mainstream media jumped all over the news and a few sites were defaced, there was strong evidence from the start that the whole contest was a hoax, most likely spawned by script kiddies in Brazil. A number of security sites decided to lampoon the security industry and government agencies by faking their own defacements with the message "I panicked over the Defacement Challenge scare and all I got was this lousy defacement." The sites also featured links to alarmist news stories and other bits of hype over the fake contest. (To their credit, Symantec didn't fall for this one and had been downplaying it from the start.) Perhaps the media will learn their lesson and approach future stories with a little more skepticism instead of just selling headlines. If they had left this story alone in the first place, the contest would have gone undiscovered and nothing would have happened. ;-)


Wednesday, July 2
One of our favorite sites, OSopinion.com, has been down for several days, presumably because of financial trouble. Today, Wired News reported that Vmyths.com is at death's door for financial reasons. Every independent site I know is struggling to stay afloat and worried about the future. Watching the dot com death pool in 2000 was amusing because it was filled with companies that were started as harebrained get-rich-quick schemes. Watching independent sites that provide free information close is heartbreaking. Supporting a web site from advertising alone is getting more difficult by the day, and ad rates are still falling. As a result, many independent sites (including us) have established a system for accepting donations to help offset the costs of hosting fees and other expenses. In theory, this is a great idea. If each of our 400,000 monthly visitors sent us $1 a month, we could be advertising-free and offer content that would rival all of the print publications combined. Unfortunately, fewer than 1% of visitors donate to websites, which isn't enough to offset the costs of running a site. So what's left? The last alternative is subscription based sites - essentially the death of free information on the web. Many web sites already require some type of basic registration, and then charge for "premium" content. This model would kill much of the web as a research tool. Imagine if every online magazine only allowed subscribers to access their articles. What if ZDNet, CNET, IDG, and all of the other publications started charging for access? What if Microsoft charged for access to TechNet (as they do for their Premier support site)? A web without ad based or user based support would wipe out a large percentage of websites, leaving only commercial (business) sites, academic support information, the hobbyists, and newsgroups. While the last three categories would make the internet purists happy, it would shrink the web considerably.
Could you imagine logging in to every site as you surfed the web? Or paying an annual (or per page) fee to access a website (tracked, of course, by Microsoft Passport)? Nobody likes intrusive banner ads. But nobody wants the web to turn into a "pay per view" channel either. So what's the happy medium? Could the PBS fund raising model work on the web? Will web advertising get more intrusive? Or will the internet become a toll road?


Tuesday, July 1
Today is Canada Day! It also marks the official end of support for NT Workstation 4.0, although NT 4.0 Server will be supported until December 2003. Many companies are up in arms about this, and I'm not quite sure why (outside of the cost of migrating thousands of PCs to a new OS). The operating system was released in 1996; it has almost no plug and play capabilities, very limited hardware support, and isn't as stable or secure as Microsoft's latest desktop operating systems. After working with Windows 2000 and Windows XP, I haven't missed anything about NT 4.0. I still keep a functioning NT 4.0 Server in the lab for compatibility testing, and actually have a copy of NT Server 3.1 just for fun. Going back and sitting at the console of either of these operating systems really brings home how much Microsoft has done to improve the Windows NT family. Active Directory, MMC, Group Policy, better memory management, the Recovery Console, improved backup/recovery options, EFS, and Windows Update (or Software Update Services for corporate LANs) are all compelling reasons to switch to Windows 2000 or Windows XP (NT 5.0 and NT 5.1). Losing support for NT 4.0 Server is a different story. Many environments aren't ready to migrate to Active Directory and are maintaining NT 4.0 Servers to function as domain controllers while running Windows 2000 Professional on the desktop and Win2000 Server for applications and file and print services. In addition, several companies have devoted thousands of dollars to developing custom applications that run on NT 4.0, and they aren't ready to re-engineer those applications. Still, a handful of legacy servers in an environment of thousands shouldn't be a big deal. So why the backlash? How long should Microsoft support older operating systems? Is 7 years enough? Should it be 10 years?

 

  

 


Send us your feedback!
If you have any questions, comments, or suggestions that would help us improve this page, please drop us a line and let us know!


This site and its contents are Copyright 1999-2003 by LabMice.net. Microsoft, NT, BackOffice, MCSE, and Windows are registered trademarks of Microsoft Corporation. Microsoft Corporation in no way endorses or is affiliated with LabMice.net. The products referenced in this site are provided by parties other than LabMice.net. LabMice.net makes no representations regarding either the products or any information about the products. Any questions, complaints, or claims regarding the products must be directed to the appropriate manufacturer or vendor.