Private Workspaces Increase Productivity

Dale Dauten, author of the Corporate Curmudgeon column, offers some empirical (real-world) proof of something I’ve known for a very long time: working alone substantially, measurably increases productivity. You can read a copy at the Arizona Daily Star (http://azstarnet.com/business/article_d636ff2d-b4b2-5ad7-93fb-c212397478a8.html). Here’s the Executive Summary:

“The best employees” have always been aware that their own efficiency makes them targets of resentment. What are their choices, assuming they want to fit into that kind of culture? Work slower – which is totally against their inclinations – or work dutifully for some percentage of their time at work, and use the rest for something they’d rather do. Google recognizes this, and explicitly allows its employees roughly the equivalent of a day per week for their own projects. Some of those projects, not surprisingly, turn into huge money-makers. 3M does the same, and think about some of the results (I challenge you to name one hugely popular product that came from this policy).

Dauten’s conclusion, at least on the surface, is that “Many companies could use less management face-time and more back-time.” People work more quickly, with fewer errors, when they work alone.

The Albuquerque Journal ran this column in their Business Outlook insert, with the headline “More freedom, privacy yield more work done.” The Arizona Star headlined it “Cut back on close supervision – and see what great things evolve.” I think both of these miss the point. It’s not that ALL people work better with less supervision. I’ve done my time in management, and I know that a substantial percentage of the employees I’ve managed would do little or nothing unless specifically directed – which has always been a disappointment to me. It’s been a delight, over the decades, to run into the occasional individual who shows enough initiative that you can simply leave them alone.

But Dauten seems to understand the point. “Give employees more freedom, look the other way, and the best of them will do more than is expected, not less.” The critical phrase here is, “the best of them.”

Considering the critical nature of our work, fellow ITstas, we can’t afford to be anything other than the best of them. I know I’ve failed, in some past positions, to make it clear that I can’t effectively do highly complex work (like coding software modules) in a crowded setting; in fact I think very few people can. That’s why I work as a consultant: I can devote undivided concentration to my task, and frequently accomplish more in two hours than I could in two days in a bullpen. And bill accordingly.

Cyberwar: It’s Here, It’s Now. What Do We Do?

“The internet is inherently unsafe and should be replaced with a safer, re-architected alternative, says former White House cybersecurity advisor Richard Clarke.” (http://www.computerweekly.com/Articles/2010/10/13/243326/RSA-Europe-2010-Replace-internet-with-something-safer-urges-former-White-House.htm)

Consider that: totally replacing the Internet’s infrastructure as a cheaper alternative to our current hodge-podge of security. He’s talking about replacing every router (and the big daddies are very, very expensive), possibly every switch (since telco switching, not Ethernet switching, provides a lot of our backbone services), and likely all the other infrastructure that connects them. He’s telling us that’s cheaper than fighting our current battle, because that battle is doomed. Which is something to ponder, considering the potential scenario:

[H]e said, Iran was clearly a target of Stuxnet, described as the first known cyber weapon, and if tensions escalate, it is not impossible that Iran could retaliate in kind.

Remember what Pakistan did to the Internet when it tried to block YouTube in 2008: a botched routing announcement, meant only to keep its own citizens off the site, leaked to the rest of the world and cut most of the planet off from YouTube as well. The scene: Iranian coders reverse-engineer Stuxnet and unleash it on, oh, every nuclear power plant on Earth. Whom would we nuke in response?

Maybe what we need are Rules of Engagement:

Establishing the rules of engagement around cyber war should be a top priority for governments, says Michael Chertoff, former US secretary of homeland security. (http://www.computerweekly.com/Articles/2010/10/14/243355/RSA-Europe-2010-Cyber-war-rules-of-engagement-39should-be-top.htm)

I chuckled when I heard about this idea, but after reading fuller discussion at the article linked above, I’m starting to see the sense of it. Rules of engagement? For war? Isn’t the idea to just blast the other guy? Well, yes. The idea is that you don’t bring an atom bomb to a knife fight.

Remember MAD? Mutual Assured Destruction? As in: if you bomb me I’ll bomb you too and we’ll all die. (Cue Dr. Strangelove laughter down an echoing hallway.) That was the Nuclear Doctrine. Don’t destroy us and we won’t destroy you.

It seems sensible to agree as gentlemen and scholars that we won’t take down your public infrastructure as part of warfare, as long as you don’t do it to us. Let’s not cause each other’s nuclear reactors to go hypercritical, shall we?

What I’m left wondering is, given a culture that specifically defines it as honorable to lie to an enemy, what are terrorists’ agreements worth anyway? They are neither gentlemen, nor scholars. In this game there is no Trust, But Verify. There is only Do, or Die.

“Stuxnet is going to be the best studied piece of malware in history”

“Stuxnet is going to be the best studied piece of malware in history” – Ralph Langner, at his Stuxnet Logbook,
http://langner.com/en/index.htm

The Stuxnet event is like a hidden 9/11. The ramifications are huge, but we haven’t felt them yet. And it’s a hell of a read: infected Russian web sites, amateurish Iranian programmers, and fantastically proficient hackers. It’s disturbing as well: there is a vast, almost unimaginable amount of catching up to do – on all those industrial control systems that are now connected to the Internet but were never designed for security.

Well, they’d better be now. Otherwise the hacker community gleefully dismantling Stuxnet code is going to be tampering with everything from nuclear power plants to, as Langner likes to say, “the cookie plant next door.”

Oh. My. God. There is just so much work to do in security ….

Question: Am I Safe Recommending Joomla To My Clients?

On 10/13/10 5:29 PM, A. wrote:

Hi Glenn,

One of my clients had their site hacked over the weekend (in-house server).  This is what their server manager had to say about it — do you share this opinion?  Am I putting my clients at risk by recommending these programs?

A.


You should be aware that there are a couple of major security problems with Joomla/PHP/MySQL and its probably just a matter of time until the hacking happens again.  We can change the backup method to keep the content more up to date and just rebuild the server each time it gets hacked but over time that will be pretty expensive I suspect.

 

Hi A. –

Of course you know I smile when I read that.

Here are the issues:

MySQL:
Yep, open source, but also yep, managed by a company with a vested interest in it (Sun, which is now part of Oracle).
Yes, there are potential security issues, especially with older versions (4.x).
No, it is in no way less secure than Microsoft SQL Server, if that’s what he’s arguing.
Here I really smile because I’m more than happy to demonstrate exactly that to your client.
If you really want an iron-clad database, get Oracle and pay for the support. It’ll be worth it if you’re Amazon.com, and less so if you’re smaller.

PHP:
Okay, I’ll call that bluff: name specifically which issues he’s talking about. Because:
What is the preferred alternative? If not open source (PHP, Python [which are the languages Amazon and Google are built on, by the way], Ruby/Rails, etc.) then “closed-source,” meaning either a .NET or pure Java implementation.
Java? Got aeons to develop your software?
.NET? Secure? Are you really serious?
Nope, there are no secure languages, only good programmers. One has to choose: the closed model of Microsoft or the peer-reviewed model of all academia, the scientific community, and open-source languages.

Joomla:
Yessirreebob, Joomla has vulnerabilities. ALL frameworks do.
Your responsibility to your clients is to keep their Joomla sites patched to the current version. I grind my teeth at keeping my own site up to date, but it’s something you’ve gotta do.
My ISP kindly warns me to update if I get too far behind, but frankly, it’s a laughably easy process – built right into the menu system!
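For instance – a quick sketch, assuming a stock Joomla install (the exact file holding the version constants varies between releases) – you can check what a site is running straight from the shell:

grep -E "RELEASE|DEV_LEVEL" libraries/joomla/version.php

If that reports something older than the current release posted on joomla.org, it’s time to update.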

So no: you are not irresponsibly endangering clients by recommending Joomla, no more so than you would be with any other platform.
You are, of course, pointing out to them that they’re choosing a free alternative, and they have every right to choose to pay.
But read a Microsoft EULA some time: they disclaim virtually every warranty – fitness of the software for any purpose, security, financial loss, anything and everything else. So you certainly have no legal recourse against them.
Then read the papers: this virus, that exploit, this cool SQL hack. Does anyone really believe Microsoft’s products *are the most secure?*

Since you ask for my opinion, this is it, with all the usual caveats, namely, anyone can exploit human nature to defeat security.
How, exactly, was their Joomla site hacked?
The most common exploit is this: an administrative assistant receives an official-looking email saying that a password change has been requested for their account – could they please click here to accept it?
90% of AAs will click here.
This is not a Joomla exploit, this is a human exploit.
Just curious…..

By the way, your sysadmin could take an image of the server (with Ghost or a similar tool) and restore it in minutes. That’s a standard practice I recommend to all my clients. Not keeping server images costs many, many hours when restoration is necessary. Just saying.
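On a Linux box the bare-bones version of that is a single command – a sketch only, with placeholder device and destination names, and ideally run from rescue media with the disk unmounted; Ghost, Clonezilla and friends do the same job with more polish:

dd if=/dev/sda of=/backups/server.img bs=64K conv=noerror,sync

Boot from rescue media, reverse the if= and of= arguments, and the machine is back where it was when the image was taken.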

Hope this is useful –
Glenn

HP Interview Questions

Here is a valuable summary submitted by Dennis H., who interviewed multiple times with HP in Albuquerque. I have to give him credit – he managed to recall most of these after the fact, which is probably much better than I could do.

Wednesday, March 4, 2009                    Interview w/Ray Crawford

Hardware Questions Asked:

1.  When you open the panel of your PC, what do you see?

2.  You are at a Windows XP command prompt.  How do you display a list of files, directories, and subdirectories?

3.  How do you change to a different directory?

4. How do you create a directory?

5. How do you install and use a new disk drive?

6. What are the various Windows XP versions available?

7. When you try to boot up the PC, it asks for a password.  The person who knows the password is away on vacation and not reachable.  What do you do?

8. How do you display the IP settings for your PC?  You are at a Windows command prompt.

9. You just installed a new CD player on your PC.  However, the system doesn’t recognize a CD you placed in the player.  What can be the problem?

10. How do you make a bootable floppy on a Windows XP system?

11. How do you do a system restore from a previous system backup?

12. How do you adjust the speed and duplex settings of your network connection (or modem) settings?

13. You have a Windows Vista OS.  What does the UAC (User Account Control) do for you?

14. Why use safe mode?

15. You have reset the monitor/video settings to 1024 x 768.  The monitor is fuzzy and out of focus.  What can you do to fix the problem?

16. For TCP/IP network, what do you need to know to set up an IP address?

17. What are the various OS versions of Windows Vista?

18. What is the difference between 802.11a/b/g for a wireless network?

Thursday, March 5, 2009 (10:30 am)        Interview w/Nancy Menard & Jason Caldwell

Hardware Questions Asked:

How do you upgrade memory?

How do you know what type of memory you have and what increment(s) of memory you need to install more memory?

How do you actually do the memory upgrade?

So, your Windows XP system is running slowly, and you are going to upgrade from 2 GB to 8 GB of memory.  You do the install, but the system says you have 3.2 GB available.  What happened? (What went wrong?)

What are parallel and serial buses?

Using Backtrack 4: Information Gathering: Route: tctrace

tctrace

Discussion:

From http://phenoelit-us.org/irpas/docu.html#tctrace:

TCtrace is like itrace a traceroute(1) brother – but it uses TCP SYN packets to trace. This makes it possible for you to trace through firewalls if you know one TCP service that is allowed to pass from the outside.

Notice that qualification: You have to know at least one TCP service running on the host. There are, of course, numerous ways to discover this, for instance using DNS records (on the internet) or a simple NET SHOW.
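One quick way to find such a port – a sketch, with a placeholder hostname; any port scanner will do – is to probe a handful of services that firewalls commonly allow through:

nmap -p 22,25,80,443 www.example.com

Whatever comes back open is a candidate for the -D option below.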

Stage:

Information Gathering

Home Page:

http://phenoelit-us.org/irpas/docu.html#tctrace

Tutorial:

From http://phenoelit-us.org/irpas/docu.html#tctrace:

Usage: ./tctrace -i eth0 -d www.phenoelit.de

 -v              verbose
 -n              reverse lookup answering IPs (slow!)
 -p x            send x probes per hop (default=3)
 -m x            set TTL max to x (default=30)
 -t x            timout after x seconds (default=3)
 -D x            Destination port x (default=80)
 -S x            Source port x (default=1064)
 -i interface    the normal eth0 stuff
 -d destination  Name or IP of destination
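As a quick illustration – the hostname is a placeholder, and the port should be one you already know the firewall lets through – a trace against an HTTPS service would look like this:

./tctrace -i eth0 -D 443 -d www.example.com

Each hop answers (or times out) just as in a normal traceroute, except the probes are TCP SYNs to port 443 instead of UDP or ICMP.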

Using BackTrack 4: Information Gathering: Route: tcptraceroute

tcptraceroute

Purpose:

To perform a traceroute into a network when firewalls prevent using ICMP or UDP for normal traceroute probing.

Discussion:

From http://michael.toren.net/code/tcptraceroute/:

tcptraceroute is a traceroute implementation using TCP packets.

The more traditional traceroute(8) sends out either UDP or ICMP ECHO packets with a TTL of one, and increments the TTL until the destination has been reached. By printing the gateways that generate ICMP time exceeded messages along the way, it is able to determine the path packets are taking to reach the destination.

The problem is that with the widespread use of firewalls on the modern Internet, many of the packets that traceroute(8) sends out end up being filtered, making it impossible to completely trace the path to the destination. However, in many cases, these firewalls will permit inbound TCP packets to specific ports that hosts sitting behind the firewall are listening for connections on. By sending out TCP SYN packets instead of UDP or ICMP ECHO packets, tcptraceroute is able to bypass the most common firewall filters.
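A minimal run – the hostname and port are placeholders, and if you omit the port it defaults to 80 – looks like:

tcptraceroute www.example.com 443

The output reads hop by hop just like traceroute(8), ending with the response (SYN/ACK or RST) from the destination port.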

Stage:

Information Gathering

Home Page:

http://michael.toren.net/code/tcptraceroute/

Tutorial:

tcptraceroute.8.html – HTMLized manual page
examples.txt – Real world examples

Using Backtrack 4: Information Gathering: Route: protos

protos

Purpose:

From http://phenoelit-us.org/:

Protos is a IP protocol scanner. It goes through all possible IP protocols and uses a negative scan to sort out unsupported protocols which should be reported by the target using ICMP protocol unreachable messages.

More accurately, protos reports back on *supported* protocols for a particular host or router. This information is valuable because it may indicate alternate pathways to exploit. For instance, if ICMP is blocked, meaning you can’t use ping or traceroute, you could try a tool with similar functionality that works over a different protocol – arping, for instance.
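For example – a sketch with a placeholder address, and keep in mind that arping speaks ARP, so it only works when you are on the same local segment as the target:

arping -c 3 192.168.2.14

Three ARP requests, three replies (or silence), and you know whether the host is up no matter how thoroughly ICMP is filtered.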

Stage:

Information Gathering

Home Page:

http://phenoelit-us.org/

Tutorial:

http://phenoelit-us.org/irpas/docu.html#protos

Usage: ./protos -i eth0 -d 10.1.2.3 -v

 -v            verbose
 -V            show which protocols are not supported
 -u            don't ping targets first
 -s            make the scan slow (for very remote devices)
 -L            show the long protocol name and it's reference (RFC)
 -p x          number of probes (default=5)
 -S x          sleeptime is x (default=1)
 -a x          continue scan afterwards for x seconds (default=3)
 -d dest       destination (IP or IP/MASK)
 -i interface  the eth0 stuff
 -W            don't scan, just print the protocol list

Normal output for a Windows host looks like this:

 10.1.1.4 may be running (did not negate):
ICMP IGMP TCP UDP

While a Cisco router supports more:

 10.1.1.1 may be running (did not negate):
ICMP IPenc TCP IGP UDP GRE SWIPE MOBILE SUN-ND EIGRP IPIP

Using Backtrack 4: Information Gathering: Searchengine: goorecon

goorecon

Purpose:

Using Google to do two things that map out your subject’s attack surface:
Enumerating subdomains, and
Harvesting email addresses.

Discussion:

In the “final” release of BackTrack 4, perhaps just my copy of goorecon was broken. I putzed around hacking the script, but eventually simply renaming goorecon.rb then running

gem install goorecon

solved the issue.
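Once it runs, invocation is straightforward. From memory – and this is an assumption, so check ./goorecon.rb -h against your copy – the script takes -s for subdomain enumeration and -e for e-mail harvesting, with the target domain as the argument (example.com is a placeholder):

./goorecon.rb -s example.com
./goorecon.rb -e example.com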

Tutorial:

http://www.question-defense.com/2010/05/29/backtrack-4-information-gathering-search-engine-goorecon-find-emails-and-subdomains-using-google#more-6122

Using BackTrack 4: Information Gathering: Route: netmask

netmask

Opening Instructions:

Usage: netmask spec [spec …]
 -h, --help      Print a summary of the options
 -v, --version   Print the version number
 -d, --debug     Print status/progress information
 -s, --standard  Output address/netmask pairs
 -c, --cidr      Output CIDR format address lists
 -i, --cisco     Output Cisco style address lists
 -r, --range     Output ip address ranges
 -x, --hex       Output address/netmask pairs in hex
 -o, --octal     Output address/netmask pairs in octal
 -b, --binary    Output address/netmask pairs in binary
 -n, --nodns     Disable DNS lookups for addresses

Definitions:

A spec can be any of:
 address
 address:address
 address:+address
 address/mask

An address can be any of:
 N           decimal number
 0N          octal number
 0xN         hex number
 N.N.N.N     dotted quad
 hostname    dns domain name

A mask is the number of bits set to one from the left.

 

Purpose:

Netmask makes a polite ICMP netmask request of a given host, by IP address or hostname. A host that honors the request will reply with its subnet mask (many modern stacks silently ignore ICMP address mask requests, so don’t count on an answer every time).

This is not insignificant: knowing the subnet mask of an internal network is critical to communicating with the hosts within that net.

Clever devils will use non-standard subnet masks to obfuscate their networks. If my target’s internal address is 192.168.2.14 and its netmask is /23, how long am I going to struggle to penetrate the target if I expect its netmask is the default /24? Not very long, if I get smart fast and use a tool like netmask.
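The tool itself makes the arithmetic painless. A quick sketch (the addresses are placeholders):

netmask -r 192.168.2.14/23

That expands to the range 192.168.2.0 through 192.168.3.255 – 512 addresses, twice what a /24 assumption would have you scanning.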

One interesting note about subnet masks: everyone assumes the formal specification requires a contiguous row of ones. Formally, it doesn’t; it’s just good practice. But consider the consequences of a single zero in the middle of the ones: you’d have a discontiguous address space! That means some hosts in the same subnet might have addresses like 192.168.0.14, while others could have addresses like 192.168.16.24. You would have an awfully hard time finding one group of host addresses or the other.
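A quick worked example, with a hypothetical mask purely for illustration: zero out just the 16’s bit of the third octet and both of the addresses above land in the same subnet.

 mask 255.255.239.0  =  11111111.11111111.11101111.00000000
 192.168.0.14   AND mask  ->  192.168.0.0
 192.168.16.24  AND mask  ->  192.168.0.0   (the 16 falls on a host bit)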

I am not making this up; it was pointed out to me by a Top Secret network administrator. However, I cannot confirm actual networks using this method of obfuscation. Will this utility detect such a subnet mask? Good question….

Stage:

Information Gathering

Tutorial:

http://www.question-defense.com/2010/06/02/backtrack-4-information-gathering-route-netmask-an-address-netmask-generation-utility