If you tell enough stories, perhaps the moral will show up.

Showing posts with label policy. Show all posts

2009-10-11

Superstition

Strange email a few days ago -- a casual note from one of the Exchange admins asking me to approve enabling a batch of accounts. Rather than just refuse it out of hand, I took a look at the list -- to find a mixed bag of service accounts and shared mailboxes.

For why? Well it appeared that they had been having difficulty archiving some boxes and noticed that the affected accounts were all disabled. Proof of a good reason? No. Plenty of other boxes are disabled -- our leavers process depends on archiving the boxes of disabled users, and shared box accounts are permanently disabled by policy.

I don't know how this will turn out, but it won't be fixed by the enable flag. I don't care, as the lesson I want to draw is a little different. Superstition in IT is one of the greatest impediments to security rectification.

If I had let that request go -- after all, what do I know about Exchange? and even if I was right, they might have learned something -- if I had followed a cautious "support the admins where you can" rule, a new superstitious belief would have been created: "If there's an archive problem, make sure the mailbox is enabled." And those boxes would never be disabled again -- after all, who goes looking for trouble? And we would have acquired a vast new list of unmanaged accounts for no purpose at all.

When I started, my first rectification was to get rid of the shared domain admin account. It was easy enough to issue DAs to colleagues who needed them, but the next stage, removing the shared account, was much harder. It was protected by superstition. Apparently, all sorts of stuff would break if I canned it or changed the password, it had been tried once and bad things happened, though nobody could remember what.

Now, that risk was real, given the usage of the account, but I knew the possibilities. It wasn't the replication account, it wasn't used to build images, and there were no services running under it (that one took a script to prove). So after a good deal of fruitless argument, I just did it -- our change control was weaker then. Nothing broke, and I suspect that what broke in the past was co-incidence.
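The "no services run under it" check is the sort of thing that's easy to script. A sketch of the idea in Python, for illustration only -- the account name and the service data are invented, and on a real box you'd harvest the pairs from WMI or `sc qc` output:

```python
# Illustrative sketch: given (service, logon account) pairs -- however
# you harvest them -- confirm no service runs under the shared account.
# Account and service names here are invented for the example.

SHARED_ADMIN = r"DOMAIN\Administrator"

def services_using(account, service_accounts):
    """Return the services whose logon account matches `account`."""
    wanted = account.lower()
    return [svc for svc, acct in service_accounts if acct.lower() == wanted]

if __name__ == "__main__":
    harvested = [
        ("Spooler", r"NT AUTHORITY\LocalService"),
        ("BackupAgent", r"DOMAIN\svc-backup"),
        ("Schedule", "LocalSystem"),
    ]
    offenders = services_using(SHARED_ADMIN, harvested)
    print("safe to retire" if not offenders else f"still in use by: {offenders}")
```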

The point is that people who are out of their depth, even just a few inches, will clutch at the first turd that comes bobbing by, and once clutched, they'll never let it go. It's not a moral fault, it's a feature of human psychology, and no doubt in the wild it has survival value.

In Windows security, most people are just slightly out of their depth, even though it's pretty simple (apart from ACL inheritance, obviously.) Even though they could reach the truth with just a little effort, they don't. Instead they seize whatever comes first -- co-incidence or just wrong observation -- and their survivalist mind starts building superstition. It's my job to knock it down and I do. I don't like pretending to be authoritative, even though I took the training. But in a case like this, it's the only way forward. I declined the request, explained my reason as far as I could without accusing the team of crass irrationality, and left it at that. We'll see.

2009-09-27

Secrecy

If you want to conceal your plan for a mass redundancy day, it's probably best not to book out every meeting room in the place all day....

2009-02-09

I Am My Own Regulator

We've all seen stories like this, and they're getting more common. I first noticed it when the NHS lost crown immunity back in, ooooh, 1986. One branch of government regulates another, finds a breach and issues compliance requirements. The more deranged cases actually have one office fining another. The only person punished is the taxpayer, as the overall costs of government rise. In theory, careers suffer, but in fact the civil service requires a consistent record of egregious failure to have any effect on an officer's final pension.

The absurdity does get media attention, sometimes, but the level of comment is muted compared with the gross mentalness of the situation. I think the problem is that the only reasonable conclusion to draw is rather unfashionable: there are things that are unsuitable, by nature, by structure, to be done by the government.

If Brent PCT had been a private insurer or HMO, the costs would be borne -- in a fair setup -- by the shareholders. Fair is the challenge here of course, but it's a question of reasonably hard-nosed negotiation when the contracts are let. "Fair", in this context pretty much means that regulatory consequences fall on the owners of the supplying firm. The dividend reduces, and the board decides whether the problem is severe enough to be worth fixing or insuring against or whether it was better just to take the hit. If the shareholders don't like that choice, they sell out, the price drops and the bag-holders sack the board.... And if the regulation is too hard to be borne, the supplier walks away and society gets a lesson in realism.

There's nothing available, structurally, to deliver the same result from a public sector supplier. Basically, all you can do is dock the pay of the managers, and watch your remaining sliver of talent in the civil service wither away. Except, you'll never succeed in touching their pay, and no-one who makes choices, no executive, will ever be motivated by any sharper spur than the desire to avoid a moderately difficult interview.

2008-09-06

ActiveX is Satan's Execution Environment. From Hell.

I went live with a simple but rather marvellous little change -- all the groups which deliver bulk machine or account admin privilege have been dropped into the group that denies browsing on the proxies. That's a huge win -- a vital step forward now that so many legitimate sites have been perved up to push BadSrc exploits and the Dear knows what else. The admins have two accounts, and if they want to browse from their workstation, they have to make sure it's not a member of any of the privilege groups. We're not mandating how the support teams arrange accounts, we're not touching anyone's permissions -- we're just declining to accept the risk of admin browsing.

It's good. I trialled it myself and -- for six months -- on the domain admins. I gave support six weeks notice and a pile of reminders. I engaged with anyone who asked for advice on the technicalities. (It mostly boils down to using runas and getting a second explorer instance.) I've written a page on the support wiki, and for those who can't handle my writing there's advice from Aaron Margosis. It seems there are no tasks that require admin privilege browsing. Everything should be good, and our vulnerability surface hugely reduced.

Except for ActiveX. One of the Desktop team's top-twenty calls is to install or update an ActiveX applet from an external web site. And there's no way round it -- you do need to browse and you do need to be an admin, because what you're doing is exactly what malware does -- it's just that you happen to trust the site.

There's no need for this. I don't see ActiveX giving any better user experience than JavaScript -- it's just bad design. But it has to work.

I'm not going back. But:

  • It's pretty plain that this can't be handled with Windows permissions. ActiveX is too broken. And anyway the philosophy of this change has been to leave Windows access alone. 
  • So we have to look at the other side. When we do this at the moment, why is it OK? It's because the admin, reassured by the user, trusts the site to be safe, and required for business.
Naturally the block imposed by the no-browsing group is right at the top of the proxy policy. So I'm going to go in with a rule immediately in front of the block. If the user is a desktop admin, and the site is in a static list of "Approved for ActiveX" then the browsing is allowed, and the blocking group won't get a chance to take effect. There's an extra step to get new sites into the list but I don't think that will be too much inconvenience, and like the rest of this change, it's the sort of control we should have had a long time ago.
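In miniature, the evaluation order is everything: first match wins, so the allow rule shadows the block for approved sites only. A toy Python model of the idea (group names and the approved site are invented for the example, and no real proxy works quite like this):

```python
# Toy model of ordered proxy policy evaluation. Rules are checked top to
# bottom and the first match wins -- so the ActiveX allow rule, placed
# just ahead of the admin block, shadows it for approved sites only.
# Group and site names are invented for the example.

ACTIVEX_APPROVED = {"updates.example-vendor.com"}

def evaluate(user_groups, site):
    rules = [
        # (predicate, verdict) pairs, in policy order
        (lambda g, s: "desktop-admins" in g and s in ACTIVEX_APPROVED, "allow"),
        (lambda g, s: "no-browsing" in g, "block"),
        (lambda g, s: True, "allow"),
    ]
    for predicate, verdict in rules:
        if predicate(user_groups, site):
            return verdict

if __name__ == "__main__":
    admin = {"desktop-admins", "no-browsing"}
    print(evaluate(admin, "updates.example-vendor.com"))  # allow
    print(evaluate(admin, "random-site.example"))         # block
```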

We have to settle who will approve sites into this list, but that's easy: I will.

Next step: probably to enable fast user switching on the desktops, to make life easier all round.

2008-08-05

Time for Tubby Bye-Bye, Meestair Bond

Well, the NMAAJS Daughter has been on Club Penguin for a month or so, and she's been enrolled as a secret agent. You get a tool to move around the site more easily, a range of mission games, a secret tunnel from the sports shop to the surveillance HQ and some fine clothing options like a bow tie and a tuxedo. (Why on earth would a penguin -- the world's most sophisticated bird -- need a dinner jacket?)

But the real meat is in the handbook. You have to report mean penguins and the ones who use bad words, so some harried moderator in Tucson or wherever can review the log and decide on an appropriate action.

Little do they know that the NMAAJSD has essentially no chance of spotting bad language -- we were watching two potty-mouthed puffins F Uing and F U 2ing and she had no idea what it meant. And this is the child who, on her fifth birthday, addressed the author of her being in these terms: "Just fuck off, Daddy."

Still, you have to give them credit. They're at least trying to make it fun to be a snitch, and that puts them a little ahead of the Stasi.

2008-07-07

Club Penguin Without Being Mad

Club Penguin is an MMORPG a bit like Second Life. Except that you can't use bad language. And your avatar is a Penguin. And it's owned by Disney. This is right up the Not-Mad-At-All-Just-Stubborn Daughter's street and for her ninth birthday treat she was subscribed.
So that's lovely except that the browser applet wouldn't connect.
Now by rights I ought to go off on a LUA rant here about the daftness of software for children that has to be admin to run. Except that CP is fine as an ordinary user and in fact I had an inkling what was wrong as soon as I saw the message.
So I went off searching and found this support page. Take a look at point four.

4. If none of these things work, you should call your Internet Service Provider (ISP). That is the company that you pay to connect to the Internet. They might be using a firewall that is blocking the ports that lead to Club Penguin. When you call them, tell them to open up these ports for TCP traffic, inbound and outbound: 3724, 6112, 6113, and 9875.
That's right, you have to open the ports, inbound and outbound without any limitation by address! "Sure I've got a hardware firewall, except that if you scan these ports you can reach a closed source server written by security numbskulls running on my daughter's PC..."
Long faces all round in the U household.
But it's actually OK. All it really seems to need is those ports open outbound, and it runs fine, with the NMAAJSD playing the mini games to her heart's content.
And that's the answer I expected when I opened the reply to my support enquiry. I'd asked for the server addresses so I could limit the inbound traffic. What I got was a different list of ports (843, 9875, 6112, 3724, 6113 and 9339) with no reference to my questions about direction or limitation. This is software that's intended to be safe for children.
Nice try Walt. But Mad Aggy's happy, and that's what matters.

2008-03-05

This Job is Weirding Me Right Up

I was sitting next to a man on the train and noticed that he was looking at porn photos on his telephone. I didn't think "Blimey, that's a bit much on the train!" I didn't think "I wonder if that's his missus." I didn't even think "Ooh gissa look!"

No. The first thing in my head was: "I hope that's not a work phone."

2007-12-07

Insourcing Authentication

It's appraisal time and the focus is on the performance management system. That's outsourced -- Internet delivered and hosted somewhere in Florida.

The issue that was brought to me was concern that users might be saving their performance management password in the Internet Explorer credential cache. It's never something that's worried me very much -- if you lose control of your workstation session, you've lost a lot more than the right to express an opinion on that annoying support guy with the awkward questions....

But it tied up some ideas that have been rather weakly formed in my mind.

We're outsourcing more and more, and the result is that our users do their jobs with accounts on this system and accounts on that, and I have no real confidence that there's even a consistent list. I'm certain that there are some systems a leaver will retain indefinite access to, simply because the whole service was set up by the business with no IT involvement and the helpdesk will never know to cease the account. This is pretty galling when we've recently put so much work into the Joiners/Leavers/Absentees process and the unused account purge. We're actually getting on top of this, but it's slipping away though a side door. There's certainly no hope of enforcing a consistent account name or password complexity policy.

At the same time, to deal with the many sites like Blogger, Delicious and others that I use all the time from loads of PCs, I've been looking at OpenID, a public authentication system, that allows the administrators of an Internet hosted application to securely trust a logon completed at a different site. I've gone so far as to set up an OpenID on the Verisign test site, even though I've nothing to log in to it with.

So I've been toying with the idea that authentication was a service we could outsource -- to Verisign or perhaps a two-factor supplier. In fact, I had that exactly wrong. Authentication is the one service we can always do better than anyone else because no-one can know better than we do, who works for us. This is true even if we don't know very well ourselves....

So we shouldn't outsource -- we should insource. We should provide an OpenID service as part of our infrastructure support for application outsourcing. Then we become the authority on who works for us, and what tests they have to pass to prove it:

  • Log on from inside, and you just need a logged-on Windows session; log on from the Internet and it'll ask for your RSA token.
  • The helpdesk can cease your OpenID when you leave, so terminating access to services they don't even know exist.
  • The authenticator could decline to recognise remote applications completely or on a per user basis.
  • Choices about access to the dodgier stuff like the password reset tool, or remote access can all be made here.
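The first two rules amount to a tiny policy table keyed on where the logon comes from and whether the account is still live. Sketched in Python -- the factor names and statuses are my own shorthand, not any real OpenID software:

```python
# Sketch of the authenticator's decision: what proof is demanded depends
# on the request's origin and whether the account has been ceased.
# Factor names and statuses are invented shorthand for the example.

def required_factor(origin, account_active):
    if not account_active:
        return "deny"               # ceased OpenID: kills access everywhere
    if origin == "internal":
        return "windows-session"    # a logged-on Windows session is enough
    return "rsa-token"              # from the Internet, ask for the token

if __name__ == "__main__":
    print(required_factor("internal", True))   # windows-session
    print(required_factor("internet", True))   # rsa-token
    print(required_factor("internet", False))  # deny
```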
So it would all be fabulous. Just a couple of problems:
  • There doesn't seem to be OpenID software with the flexibility and convenience I need, and
  • The chances that application hosts can be persuaded to recognise their customers' OpenIDs seems close to zero.
So this frankly rather wonderful approach, which ought by rights to be standard, is dead. But I think I'll put OpenID support on the qualification form just to watch them squirm.

2007-11-22

How policy succeeds, for once

I've been purging out a dying domain. Disabled accounts with a last logon more than three months ago are deleted; enabled accounts with a last logon more than one month ago are disabled with a note in the comment. Do that every week or so. Keep a safe list for genuine service accounts and the domain will be nicely compliant by the time it stops.
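The rule reduces to a small decision table. A Python sketch of the weekly pass (thresholds as in the post; the safe-list entries and account fields are invented for the example):

```python
# Sketch of the weekly purge rule: disabled + stale (>90 days) -> delete,
# enabled + stale (>30 days) -> disable with a note, safe-listed service
# accounts untouched. Names and fields are invented for the example.

SAFE_LIST = {"svc-backup", "svc-monitor"}

def purge_action(name, enabled, days_since_logon):
    if name in SAFE_LIST:
        return "keep"
    if not enabled and days_since_logon > 90:
        return "delete"
    if enabled and days_since_logon > 30:
        return "disable"   # and note the date in the comment field
    return "keep"

if __name__ == "__main__":
    print(purge_action("jbloggs", False, 120))    # delete
    print(purge_action("jbloggs", True, 45))      # disable
    print(purge_action("svc-backup", False, 400)) # keep
```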

The reason I've had to do this myself is a bit sad: the helpdesk, who own all account administration, will go through any distortion to avoid account difficulties. An odd-looking account -- precisely what should be disabled -- won't be touched for fear of breaking something. The policy itself gets re-interpreted to be "disable after ninety days" with no-one able to trace where that decision came from.

It's understandable. The best outcome from good application of the policy is that no-one complains. The likely outcome is senior staff complaining that the helpdesk has broken their account -- and no-one wants to hear that.

So, I've been doing it myself, and that makes everything different. Everyone knows that I break stuff, but everyone also knows that challenging me on what I break can leave them on the wrong side of a clearly distributed policy that they didn't read or understand....

Yes, and in this case I did a blinding job: The account policy allows just two types -- owned, which are subject to the AUP, and service, which have to be on my list. The AUP says that owners are responsible for owned accounts, have to log on more often than once a month, and log off after no more than a week. That was carefully chosen to update the last logon time, and to transfer blame.

And it works! Hundreds of users deleted, a few tactful explanations, and no trouble at all. This is the root of the security truism that you start with a policy. You can't act without it -- but it has to be a good'un.

2007-11-21

Audits fall with autumn leaves

We've just been visited by one of the many audits to which a regulated firm is subject. We didn't come out as well as I would have hoped but the point for me was different and more worrying.

These were competent people. They were clear about their wants: Evidence that the controls we publish and claim to adhere to are actually working. And they knew what "working" meant -- that the circle is closed with human escalations and choices on exceptions. So that was good (and a lot of work for us) except for one teeny issue.

"Working" also means that the control environment will actually stop trouble. And these guys had essentially no interest in the technical effect of the controls. If I said "this is a report that shows yesterday's changes to all application admin groups", that was the truth. No test that we have the same reporting on all production DCs. No enquiry about alternative ways to get the privilege. No test that our installations actually adhere to the admin group conventions. If I listed a firewall policy, or handed over the perimeter network diagram, that was it. No enquiry about how often I checked the cable patching....

Now I know that they can't check everything. And I wouldn't want them to.... I know that they're at the wrong end of a crushing knowledge asymmetry. But all the same, it reminds me of the drunk searching for his keys under the lamp post: not because he lost them there, but because the light is so much better.

In the mean time, remember:

  • A big four signature on a statement of controls -- SAS70 or whatever -- means less than you think.
  • Somewhere in the big city, a security guy is neglecting controls that expose trouble in favour of those that'll audit well.

2007-10-05

The Approver as a Conceptual Bottleneck

I'm looking at a list of domain groups which control access to removable devices and media through Pointsec Protector (which used to be Reflex Magnetics Disknet Pro). We've had groups for various types of devices and now I'm trying to simplify -- to operate at a much cruder level of control.

I'd prefer to leave it as it is, but the membership of the current groups is a mess. The technology is fine, but the control environment stinks. At present we allow or deny access to the groups based on a manager's approval: "he has a business need to use a USB key" -- and that makes a good deal of sense. Who else can make that choice?

Who else indeed? Because the managers aren't technical -- so they don't understand what they're approving -- and they don't see any downside from insecure access. Essentially every request gets approved. And in six months' time, when the lists have to be recertified, it's probably a different manager, or the original justification is forgotten, and it's easier just to agree. We've got adequate technology, and a process that the auditors think is just fine, and there's no real control at all.

Of course, this isn't just a problem for USB devices. It's very easy to fail at this last hurdle by asking approvers to use a discretion that they just can't understand.

I've thought about making these accesses part of the permissions attached to the job description global groups. But that doesn't reduce the problem unless IT security can engage with the role definition approvers, and we don't.

So this is my plan: I'm going to name the groups after the risk, with alarming group names and descriptions:
Risk In -- "Trusted to read data from unknown sources"
Risk Out -- "Trusted to send corporate data to unknown destinations"
Device Risk -- "Trusted to attach untrusted and untested devices"

Let's see whether that gets the message across.

2007-05-25

Quickest Compromise

Browsing round Ikea today I saw sales workstations left logged on to a Windows console, and that set me thinking. Our AUP requires users to lock their workstations on leaving them because the default screensaver lock of fifteen minutes is easily long enough for a malicious passer-by to compromise the whole network, and I think that's fair enough. But I wouldn't have fancied standing in front of one of those screens trying to hack Ikea for more than about ten seconds. "Hey you..." So what's the quickest possible way to carry out an opportunistic compromise?

  1. It's a real console -- a PC screen keyboard and mouse.
  2. The logged on user is not an admin or a power user.
  3. You can reboot (but not change a password), but the only boot device is the HD. USB, floppy etc. are all closed.
  4. Internet access is through a proxy server running a business-access-focussed site category policy
Extra credit for universal applicability, and evading basic security precautions:
  • ICAP server running signature checks on downloads
  • No access to root of C:\ or anything other than the local profile
  • No command line, regedit, ....
  • Minimal profile in the event and proxy logs
  • Hacked user can return to the console and notice nothing

I suppose the key points here are the exploit itself and the phone-home to control it. My mind is running to a binary exploit file, customised enough to pass signature checks, uploaded somewhere innocuous, and renamed after download to the desktop. The phone home is tougher.

2007-04-10

Bedtime

As it's the holidays, the kiddies would stay on the PCs all night. To get them into bed, stern measures are needed:

First you need to set the accounts they are using (you're not letting them be admins, are you?) to have fixed logon hours. You can't do this through the GUI on XP Home, so you need a batch file of commands like this one to set times when it's possible to log on. Call it accounts.cmd -- you'll need to re-run it when anything changes:

net user mmadson /times:sunday,08:00-21:00;monday-friday,09:00-21:00;saturday,08:00-21:00
The /passwordreq:no directive can be useful here too.

Unfortunately, Windows won't enforce that logoff. (A domain would, but Windows itself will not.) So the second step is to force it. There would be any number of ways to deal with this, but I chose the ugliest: run the Sysinternals psshutdown command at bedtime. I chose to run it from a command file so that I could get a log. Put this text into enforcer.cmd, make the obvious modifications and set it up as a Windows scheduled task. (In the control panel, under Performance and Maintenance.)

@echo off
echo "-start-" >>d:\at\log.txt
date /t >>d:\at\log.txt
time /t >>d:\at\log.txt
d:\at\psshutdown -o -f  >>d:\at\log.txt 2>&1
echo "-end-" >>d:\at\log.txt
I set it to run twenty minutes past the last log on time. Hey presto! Instant rage from the younger generation.

The obvious improvement is to only log off console sessions which are members of the kiddiewinks group. It's really annoying when it logs ME off! I think I could script that up, but I'm too idle.
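For what it's worth, the filtering is only a membership check gating the shutdown call. A Python sketch of the gate (the group contents are a plain dict for illustration -- on a real box you'd read membership with `net user <name>` or the Win32 API, and the logoff itself would still be psshutdown):

```python
# Sketch: only force a logoff when the console user is in the kids'
# group. Membership is a hard-coded dict here for illustration only.

GROUPS = {"kiddiewinks": {"mmadson", "tmadson"}}

def should_logoff(console_user, group="kiddiewinks"):
    """True if this console user should be kicked off at bedtime."""
    return console_user in GROUPS.get(group, set())

if __name__ == "__main__":
    for user in ("mmadson", "dad"):
        action = "psshutdown -o -f" if should_logoff(user) else "leave alone"
        print(f"{user}: {action}")
```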

2006-12-21

The Rules

I turned down a system last month. It needed a user to be permanently logged on at the server console, which implies a password shared among the support team. The chances of that being tough and regularly changed are nil, so my vote was no.

We'll see if I can make that stick! But I'm content, because I've only applied a published policy. Project people think that security imposes strange and unnatural demands on system design, and I suppose it's true that the demands puzzle people. But they're not unnatural and they're not arbitrary -- just misunderstood. So as my contribution to public education, taken from the handout I send to project managers, support people and anyone I can find, here are the rules. The way I present them is a checklist -- tick every box and you're on the right track.

First we have the Exemption Checklist for changes and small implementations -- Tick every box here and I won't bother you:

  • No file, folder, registry or mailbox permissions changed or created.
  • System is explicitly permissioned by our standard groups and does not rely on "Everyone", "Authenticated Users", "All Users", 0x??7, "Domain Admins" or "Administrator" permissions to work.
  • No Windows local or global or Unix security groups are created, deleted or changed in meaning.
  • No impersonal domain user accounts (service accounts), or any local, Unix or special device user or admin accounts are a) created, b) given new group memberships or c) made admins.
  • All human users and administrators use their regular personal Insight workstation or app/admin/Unix accounts, and there are no shared accounts, and no non-Insight users.
  • No changes to external data transfers, network security configs (firewalls/acls) or external accessibility.

For larger changes, I need to hear about it earlier. Here's the standard advice for project managers contemplating a new system. Again, if you can't check every box, we need to talk:

First, how about Unattended Processing (UP)? That's any processing other than a discontinuous console session on a user or administrator workstation.

  • All UP is on a server platform?
    (Servers are physically inaccessible. Console access is only granted to IT support users.)
  • All UP runs as a service or scheduled task?
    (Not on the console or in a terminal session.)
  • All UP runs without administrative privilege?
    (Not as Domain admin member, nor as server local Administrators member, nor built-in administrator including Local System)
  • All UP runs without a profile?
    (No requirement for logons using service a/c.)
  • All UP credentials stored in Windows SC password store?
Then there's Authentication of Users and Administrators
  • All work done with personal accounts?
    (No shared users)
  • Users and administrators authenticate using Windows workstation domain logons?
  • Users and administrators authorised by membership of domain global groups?
  • No user or admin credentials stored?
    E.g. in scripts or config files. (DPAPI and SC list storage is permitted.)
And finally there's the Application Structure itself
  • Admin privilege can be withheld from business users without impeding function?
    (Users are not admins -- we can keep admin functions on the support desk.)
  • Conformable with our app access model?
    (Role/Environment groups allow us to manage permissions through the helpdesk, using standard tools)
  • All resource access via application-specific group membership?
    (Excluding: Domain *, Everyone, Auth users…)
  • Administrative and security events logged in a supported means?
    (syslog, ftp upload, Windows event log, text file)
  • Will be supported on platforms kept patched up to date?
    (No vendor qualification of Windows patches)
  • Documentation identifies all resource permissions, and sensitive locations
    (config files, private keys)?
  • All Internet/external access via authenticated proxy?

Once every application can check off all these, we will be getting somewhere.

2006-10-20

Criminalise Your Enemies.

Is it strange that so much WAN traffic is unencrypted? That became a live issue for me when we were setting up a new recovery facility. Part of the project includes links between the machine rooms, and the service provider offered us a significant cost saving by using their network to replace a hop that would cost tens of thousands ordered from COLT. Everyone was happy except me. I saw it as a tap risk.

I hate taps. A network tap is one of the points where the balance tips in favour of the attacker. They are totally stealthy and very reliable. They can be serviced by a leave-behind -- a laptop running Ethereal or TCPdump with USB disks exchanged whenever the access can be had. The only real problem the attacker faces is getting access to a good network segment -- plugging in to a workstation LAN and risking an ARP spoof is going to get some user passwords, and that's not bad, but it's not the key to the domain.

But a trunk between machine rooms is another thing entirely. Modern domain traffic ought to be harmless if overheard, but console sessions on to the DCs, SNMP strings, enable passwords on switches ... One way or another, it's the place to be if you want passwords, not to mention seeing what the fileservers see.

So, OK, taps are bad. But is it any more risky to run our traffic over a service provider's network? The contract gives them a duty to keep our data confidential, and you won't find that in a service agreement from BT or COLT.

The short answer is the criminal law. Between the termination points of section 8 licensed telecoms providers like Colt and BT, special law applies: I think it's the Interception of Communications Act 1985, but anyway there are criminal penalties for tapping their systems without a warrant. They can't even do it themselves, and that's why there's no confidentiality in the contract.

The point here is not so much the penalties but the criminal liability. Evidence of a crime -- and an unexpected laptop stuffed with traffic logs is evidence -- lets the police investigate. Serious industrial spies always seek to operate below the radar of Babylon, and that makes for real protection.

IoCA is protection, but it's limited. It doesn't stretch beyond the endpoints. If we found a tap on the service provider's network, we could remove it, but no crime has been committed. To get any recourse we would have to mount our own surveillance and investigation, and that is a place I don't want to go.

We're sticking with the service provider's network, but some of the savings are going on hooking it through our firewalls with the encryption turned on.

2006-10-05

H. Sapiens

On Tuesday I was working with the owner of information risk on the information security policy. She's Jewish and we were talking about her reflection on the Day of Atonement just gone. I was, and am still, upset by the stupid emails I've been reading as part of this current investigation. Jewish spirituality has that ancient focus on the ethical value of mindful compliance with God's law, and she compares that with the chaotic response of colleagues to our sane and reasonable policy, or even the idea of policy: "Everyone would be much happier if we just obeyed the rules and got on with the fun stuff ....."

I know she's right, or at least I agree, but there's something else too, and as I groped for the words to express it, I looked around the open plan office and for a moment my vision changed. What I saw then was a colony of great apes, that third chimpanzee species, created by language and bipedalism on the journey from forest to office, but still the same animal: obsessed with rank and sexual display, endlessly inquisitive, endlessly communicating and endlessly systematising. And utterly unconcerned about rules that try to stop us being what we are.

When we accept law, we defy our own natures. Against resistance like that, the policy of the IT security ape is so much desert wind.

2006-07-21

How Security Policies Fail (5)

Policy: No plain text password storage.

Failure: The real failure here is mine -- I can't find words adequate to describe this. Maybe I should have written: "no encryption technology more than a thousand years old...."

Private Function Encrypt(strPlain As String) As String
    ' A Caesar shift: add 33 to each character code, wrapping at 256.
    Dim i As Integer, j As Integer, n As Integer
    n = Len(strPlain)
    Encrypt = ""
    For i = 1 To n
        j = Asc(Mid$(strPlain, i, 1))
        j = (j + 33) Mod 256
        Encrypt = Encrypt & Chr$(j)
    Next i
End Function

Public Function Decrypt(strCode As String) As String
    ' The inverse shift. Note that VB's Mod goes negative for character
    ' codes below 33, so even this toy cipher carries its own bug.
    Dim i As Integer, j As Integer, n As Integer
    n = Len(strCode)
    Decrypt = ""
    For i = 1 To n
        j = Asc(Mid$(strCode, i, 1))
        j = (j - 33) Mod 256
        Decrypt = Decrypt & Chr$(j)
    Next i
End Function
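What the policy wants is for the application never to be able to recover the password at all: store a salted one-way hash and compare at logon. A minimal sketch in Python -- the function names and iteration count are my illustration, nothing from the offending app:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest). Store both; the plain text is never stored."""
    if salt is None:
        salt = os.urandom(16)  # per-password salt: identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

Even a dump of the credential table then yields only salts and digests -- breaking an entry means a brute-force search, not a 33-character shift.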

2006-06-30

How Security Policies Fail (4)

Policy: No application data may be permissioned to Everyone, to Domain Users, to Authenticated Users or to any specific user. All permissions must be on non-builtin groups.

Failure: There are ways almost without number to end up with ACEs referring to Everyone or some other uncontrolled group. The most pernicious is simply inheritance of wrong permissions -- the most annoying is the shamelessness of external staff contracted to install an application. Similarly, the easiest way to grant access is to grant it to the particular user -- no need to log on and off. It really does seem as though permissioning is the area where natural human laziness is exactly opposed to security.

So this policy is certainly not lazy -- the choices required are always harder and sometimes require an unpleasant confrontation. And it's the classic non-robust policy -- unpicking the permissioning scheme of a working app, without wrecking it, is hard. It doesn't help that there's no permissions register: you have to read ACLs directly off every file and resource.

In a harsher world than mine, any server admin who set an extra-policy permission would lose his access. Either he chose to breach policy -- surely it can't be that -- or he didn't know better, in which case it's improper to allow him to be a machine admin until he's been retrained.

I've spent too much time casting around for a solution. The only approach is to dump permissions regularly, pick out the nasties and watch for deltas. That requires some heavy scripting.
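The outline of that scripting is at least simple. A sketch of the pick-out-the-nasties and watch-for-deltas steps in Python -- the forbidden trustee names come from the policy, but how you produce the dump (cacls, icacls /T, a WMI walk) is up to you and not shown here:

```python
# Uncontrolled principals the policy forbids in any application ACE.
BAD_PRINCIPALS = {"everyone", "domain users", "authenticated users"}

def nasties(acl_dump):
    """acl_dump maps path -> iterable of trustee names read off the ACL.
    Returns the paths carrying an ACE for an uncontrolled principal."""
    flagged = {}
    for path, trustees in acl_dump.items():
        hits = [t for t in trustees if t.strip().lower() in BAD_PRINCIPALS]
        if hits:
            flagged[path] = hits
    return flagged

def deltas(old_dump, new_dump):
    """Trustees that have appeared since the last dump -- the watch-for-deltas part."""
    changes = {}
    for path, trustees in new_dump.items():
        added = set(trustees) - set(old_dump.get(path, ()))
        if added:
            changes[path] = sorted(added)
    return changes
```

Catching ACEs granted to specific users needs an extra lookup to tell users from groups -- which is exactly why a permissions register would have been nice.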

2006-06-20

How Security Policies Fail (3)

Policy: Only our trusted workstation build may be attached to the LAN

Failure: Contractors and visitors need Internet access, sometimes at very short notice. The easy way to let them have it is to plug into one of the DHCP LANs.

This policy is fairly robust: it's not that hard to spot non-domain machines with an IP address, and the price of disconnecting is a brief argument about priorities, project objectives and timescales. But it is not at all lazy: it's incomparably easier to snaffle a cable from the desk next door, or even try outlets at random, than it is to order and pay for an ADSL outlet.

So we have to make a lazy route to Internet access. I see a three stage plan:

  • Deliver a "contractor convenience" VLAN through your switching infrastructure. This would have no internal routing -- just a cheap firewall direct to your Internet red side, with no inbound access, and outbound permits for browsing and VPN only.
  • Make sure there's no Internet from your internal DHCP LANs or printer LANs -- all attempts to browse direct fail at the firewalls.
  • Make sure you can account for all outlets which do have unproxied Internet.

That will tip the balance of convenience your way: you should start to see all those laptops requesting access to the contractor LAN quite soon.
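For the first stage, the firewall policy is short enough to sketch. This assumes an iptables box with eth0 on the contractor VLAN and eth1 on the red side -- your cheap firewall's syntax will differ, but the shape is the point:

```shell
# Default deny, then reply traffic only towards the contractor VLAN.
iptables -P FORWARD DROP
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Outbound permits: DNS, browsing and VPN, nothing else.
iptables -A FORWARD -i eth0 -o eth1 -p udp --dport 53 -j ACCEPT                    # DNS
iptables -A FORWARD -i eth0 -o eth1 -p tcp -m multiport --dports 80,443 -j ACCEPT  # browsing
iptables -A FORWARD -i eth0 -o eth1 -p udp --dport 500 -j ACCEPT                   # IKE
iptables -A FORWARD -i eth0 -o eth1 -p esp -j ACCEPT                               # IPsec ESP
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 1723 -j ACCEPT                  # PPTP
iptables -A FORWARD -i eth0 -o eth1 -p gre -j ACCEPT                               # GRE for PPTP
```

Crucially, there is no rule towards any internal network at all -- the VLAN simply can't see them.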

Stay on top of the risks, though. You want to make sure that your own users won't be hooking up to unfiltered Internet. You should probably arrange the workflow around contractor convenience to include an expiry date to ensure that the outlets get re-certified from time to time.

2006-06-13

Is that a Server? Or: Why you can't use domain service accounts on workstations!

What's a server? A server is a computer that you keep in the machine room. Why is that?

  • Well of course there can be a host of operational reasons. If you want to keep it running all the time, better install your box where the cleaner won't unplug it.
  • And there are the security reasons. What are they? From the security PoV, what's a server?

The point really is that access to the physical console of a PC carries a risk that we accept in the case of workstations, but don't accept for other machines. The risk is controlled by controlling access and that's why we have cards, combinations, access logs etc on machine rooms.

I think it's interesting to take the components that traditionally make up that risk and look at the consequences for the machines we DON'T put in the machine room:

  1. The contents of the hard disk are confidential or valuable. Apart from the normal confidentiality of a fileserver or application server, perhaps the build or install is hard to replicate. So we keep the box in a safe place. Implications for workstations kept in their dangerous places are:
    • Filing on the workstation C: is never right. Users should not be able to write to WS local drives, OR (the laptop solution) local drives should be encrypted with explicit backup responsibility transferred to the user.
    • You should be able to replicate any WS build, or you are hostage to any user who declines to give up their PC
  2. Local admin is available to anyone prepared to do a reboot. (You do so know how!) You definitely don't want attackers making themselves admins on your servers, so you lock them away. Workstations can't be locked away, and so their administrator accounts must each have a different, unpredictable password. Then, if I crack my own WS, I'm still not admin on any other, remote, WS. The same goes for any other local account -- so on workstations you probably shouldn't have any.
  3. The local admin can run processes as any domain account used for a service or for task scheduler jobs. So our attacker now has access to some domain accounts. (You know how to do that too, without cracking the SC database password list.) For workstations, where you know it's possible that an attacker may make themselves admin, this means that you can't use any system that uses a domain account to run agent services. That was a surprise to me, but it's inescapable.
  4. Some applications require a permanently open console session in order to work. It is our solemn duty to mock the designers of these nightmares, but we have to accommodate them, and the right place is in the machine room. For workstations, the implication is that we can only allow processing that can be shut down.

Those last two points make for the simplest definition of a server. It's a server if it does unattended processing a) under a domain service account or b) on the console.

The biggest surprise for me was the service account problem. It knocks out some agent-based management tools. Instead, we get to choose the trade-off:

  • Agentless tools pass (the same) admin credentials across the network to each machine they manage -- a terrible choice for network security.
  • Agent-based tools have to use their own secure channel to report results -- duplicating effort and potentially introducing obscure insecurity.
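Point 2 above has a lazy-compatible fix worth noting: derive each workstation's local admin password from a master secret and the machine name. Support can recompute the password on demand instead of keeping a list, and cracking one workstation tells the attacker nothing about the next. A sketch -- the scheme and parameters are my illustration:

```python
import base64
import hashlib
import hmac

def ws_admin_password(master_secret, machine_name, length=14):
    """Per-machine local admin password: deterministic, so the helpdesk can
    recompute it, but unpredictable without the master secret."""
    digest = hmac.new(master_secret, machine_name.upper().encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")[:length]
```

Rotate by versioning the master secret (include a counter in the HMAC input) -- and guard that secret at least as carefully as the old shared DA password.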