Developer Accidentally Deletes Production Database On Their First Day On The Job (qz.com) 418
An anonymous reader quotes Quartz:
"How screwed am I?" asked a recent user on Reddit, before sharing a mortifying story. On the first day as a junior software developer at a first salaried job out of college, his or her copy-and-paste error inadvertently erased all data from the company's production database. Posting under the heartbreaking handle cscareerthrowaway567, the user wrote, "The CTO told me to leave and never come back. He also informed me that apparently legal would need to get involved due to severity of the data loss. I basically offered and pleaded to let me help in someway to redeem my self and i was told that I 'completely fucked everything up.'"
The company's backups weren't working, according to the post, so the company is in big trouble now. Though Qz adds that "the court of public opinion is on the new guy's side. In a poll on the tech site the Register, less than 1% of 5,400 respondents thought the new developer should be fired. Forty-five percent thought the CTO should go."
How the fuck (Score:5, Insightful)
How the fuck does a new hire have that kind of access? that's not even enough time for on-boarding. The CTO should definitely get the shitcan, as should anyone in HR involved in that debacle.
Re:How the fuck (Score:5, Insightful)
The entire CXX staff level should be let go. Why a person fresh off the street had permission to even make a mistake of such magnitude is beyond me.
Re:How the fuck (Score:5, Insightful)
The entire CXX staff level should be let go.
First, "CXX" level is not "staff". Second, firing the entire senior management team because of this incident would be completely reckless. I understand the outrage but that's not how companies work in real life. The last thing a company in that situation needs is more instability.
The proper way to address this is to stabilize the situation, then make sure the problem cannot occur again. And this typically doesn't simply mean firing people, because odds are that there are cultural or organizational factors that made this situation possible (crazy deadlines, shoestring budgets, etc.), and those would probably lead the replacements to make the same kind of mistakes down the road.
What is needed is new processes and controls. You start with a simple governance framework (like COBIT maybe) where each part of the IT ecosystem is linked to a specific business leader, then you let each of those leaders make sure that their area of responsibility is well managed from a risk perspective. That's how you make the company more resilient, not by firing people who maybe were not empowered to make the right decisions in the first place.
Re: (Score:3)
Yes, but I think what people are getting at here is that current policies appear to be so fucked up that the team who implemented them may be unlikely to implement new processes and controls that are any better than they have already done.
I've seen that sort of stuff in a few places, (notably government owned corporations - worst of both worlds) where management have been chosen for reasons other than ability.
As shown by this incident that sort of mismanagement
Re: (Score:3)
What is needed is new processes and controls.
What is needed is a backup system that actually works, and is used.
I never trusted our official backup system, having seen it not work on several occasions. So I installed one of my own for the group. Sure enough, around a year later, a group member called in a total panic - she'd written a script that was supposed to perform a find then print the find results. What it managed to do was delete the whole database.
Calls me in a tearful freakout - the IT folks' backup didn't work.
Mine did. There were a
Re: (Score:2)
The C-level executives are the people in charge of executing the company strategy as established by the board (who represent the owners). If you remove them, you basically have a headless chicken. If you replace them all at once, then you have months of instability as they have to rebuild their teams and put their systems in place.
That's why elections are staggered in the USA. Could you imagine the mess if everyone was elected at the same time?
Re: (Score:3)
I don't fail at civics. You fail at understanding what "at the same time" means.
Here's a hint.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: How the fuck (Score:4, Insightful)
Re: How the fuck (Score:5, Funny)
Hell, even in a one-man business, development should not be done on the same machine as production activities.
Even my home machines have off-site table backups - and yes I have tested recovery works - cross platform - on different hardware and OS.
If your data is worth money, you should be prepared to spend money to protect it. If it's not worth anything, then just delete it yourself and save the disk space.
Disclaimer: I do not have an MBA.
Re: (Score:3)
I'm a developer for Azure. The service I develop doesn't have a DBA or any test team. Could I screw things up? Yes, quite easily, however since the DBs are on XStore, we can just pick a date and time within the last week and snap the DB back to that state. I don't run a big enough or impactful enough service to warrant this level of support.
Re: How the fuck (Score:5, Insightful)
He was a developer not a systems administrator. There is NO reason why he should even be allowed access to production data, let alone live production data.
Welcome to DevOps. It's the fad sweeping the industry. Yes, it's exactly as stupid as you think it is. It's so obviously wrong, so mind-bogglingly stupid that only a manager with an MBA could possibly have thought it up. So of course it's the latest fad.
Fishy (Score:3)
Not only exactly what you said, but the fact that they have no backups (or they were not working) is also pretty scathing. I've worked with some experienced developers that know our systems for 15 years, and they never ever have any access to production! Never mind they should have DEV and TEST instances as well, probably logging depending on what they are using...
Makes me wonder if their IT was that inept, or if there was some serious financial or other more fundamental technical issues that they needed to
Re: (Score:3)
How the fuck does a new hire have that kind of access?
Having worked at some small businesses before, it seems to me to be pretty common. The article said the business had a couple hundred people at most and 40+ developers. Quite likely the people there had been there for a long time and they hired a handful of people since they now had a need for some help and finally some money to pay them. This is how things were done for years, and, not remembering what can happen if a newbie fucks up once in a while, they thought nothing about handing over the documentation the
Re:How the fuck (Score:4, Insightful)
Not exactly a small business deluded sig guy.
Back Assward (Score:2)
Corporate cheapskatism fucking them in the ass: newbies handling key data and no backup system in place.
Sue/fire the CEO, not the grad.
Re: (Score:2)
Ah, the memories (Score:2)
Actually reminds me of two fellows I worked for briefly. Both of them were actually too small to have a separate CTO. In the first case I decided to leave as soon as I figured out the legal liability their main customer had incurred due to pirated software. There were actually two packages, and one of them was a database. The second case was a total shoestring operation and one of the first things I discovered was that their so-called daily backup processes were not actually backing up anything.
Now that rem
Re: (Score:2)
Well, depending on the size of the company, it could be possible. Maybe not if you're a 1000 person enterprise, but if you're a 10-100 person SME, it's definitely possible, especially a startup.
However, the lack of backups is more damning - it means the entire company is one mistake away from losing it all. I don't care if it's a new guy - it could be the rockstar developer making a typo and deleting the entire database. It could be anyone.
Even though I work for a company of less than 100 people, no one oth
Re: (Score:2)
How the fuck does a new hire have that kind of access? that's not even enough time for on-boarding.
This has little to do with noob status as an employee, or even technical experience. The real question is why the fuck a developer has access to the Production system. We call the Non-Production environment Development for a fucking reason.
The CTO should definitely get the shitcan, as should anyone in HR involved in that debacle.
The CTO should get the shitcan for not ensuring backups were working, as well as not implementing proper security policy that prevents developers from fucking around in the Production system without assistance and a documented approval process.
Regarding HR, they're fuck
Re: (Score:2)
Everyone screws up now and then, most often small, sometimes big. Sometimes it's a small screw-up where you accidentally reboot the wrong server. Once is just an oops. It's when people screw up frequently and try to get away with it that you should bury them somewhere safe or remove them. If they admit their mistake then it's better to work on salvaging the pieces and patching together the remains, or recovering from a backup.
Companies that don't have backups that they test - they are toast as it doesn't even have
Re: (Score:3, Interesting)
I'm always amazed when something like this happens, a lot of people's first reaction is 'who gets fired'? It's really not a productive attitude to take in a case like this.
I find it problematic that
- companies take an agile 'quick turnaround' approach to development without seeming to understand the risks, until they get bitten. This is an example.
- seems that whoever's managing the dev team should have had a break-in period/mentoring system in place to make sure new hires (especially newbies straight out o
Re:How the fuck (Score:5, Informative)
Additionally if the backups aren't working, that's something that could have saved the company - if the CTO had made sure that they worked properly.
I'd probably think twice before hiring the new guy as well over this, although at the same time I'd probably limit access to just what he needed to do his job. I'd also make sure I have proper backups.
But I'd avoid the CTO at all costs, at least as a CTO. (OR C- anything if this is how he operates.)
I reconfigured our backups recently. It's a huge pain in the ass because I had to wait for the end of the month, move everything to other storage, recreate new jobs from scratch, wait for them to run, copy to a second location, and perform automated validation checks in both locations, then do it all again (making sure version chaining worked), then actually go in and open up every single new backup, in both locations, to ensure they all worked, the correct passwords were used, etc.
But I know they fucking work.
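If you'd rather not do the "open every single new backup in both locations" step by hand every time, that check is easy to script. A minimal sketch, assuming daily tar.gz archives and two mounted backup locations; the paths, the freshness window, and the assumption that both locations hold bit-identical copies are all hypothetical, not details from the comment above:

```python
#!/usr/bin/env python3
"""Sanity-check the newest backup archive in two locations (hypothetical paths)."""
import hashlib
import sys
import tarfile
import time
from pathlib import Path

LOCATIONS = [Path("/mnt/backup-primary"), Path("/mnt/backup-offsite")]  # assumed mounts
MAX_AGE_HOURS = 26  # a daily job should have produced something within roughly a day


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def check(location: Path) -> str:
    archives = sorted(location.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    if not archives:
        sys.exit(f"FAIL: no archives found in {location}")
    latest = archives[-1]
    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        sys.exit(f"FAIL: newest backup in {location} is {age_hours:.0f}h old")
    # Actually open the archive to catch truncated or corrupt files.
    with tarfile.open(latest) as tar:
        if not tar.getmembers():
            sys.exit(f"FAIL: {latest} is empty")
    print(f"OK: {latest} ({age_hours:.0f}h old)")
    return sha256(latest)


digests = [check(loc) for loc in LOCATIONS]
if len(set(digests)) != 1:
    sys.exit("FAIL: primary and offsite copies differ")
print("OK: both copies are identical")
```

Run it from cron and page on a non-zero exit; it doesn't prove a restore works, but it catches the "job silently stopped producing anything" case automatically.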
Re:How the fuck (Score:5, Insightful)
If it wasn't a pain in the ass it wouldn't be called work.
Re: How the fuck (Score:5, Insightful)
Agreed. Had they said it was a month in, then maybe it could be believed. But on day one you're only getting shown around: here's your PC, here's your password, Joe will show you around the product later and explain what we do, the toilets are that way...
On day one, there'd be no production access. Nor access to source code. And if he did have that sort of access, any changes would be scrutinised.
So, yeah, it's a fake.
An interesting hypothetical scenario, though -- if it did happen, given the info available, then who's to blame?
The new dev? No: first day, fresh out of college, mistakes should be expected. He shouldn't have had that level of access, and his immediate superiors should have been keeping a close eye.
His immediate superiors? Possibly. How did they let someone new make such a big error? Why did they allow him to do anything at all?
The DBAs? No working backups on a production database? No transaction logs that could be rolled back? No DR solution in place? Basic stuff here that was all missed. So definitely some blame sits here.
CTO? Hard to say. Certainly policies should be in place to ensure this shouldn't happen, so why did it under their watch? Were the staff too overworked that they didn't have time to get the basics right?
To properly assign any blame, more information is needed. We don't have all the facts. Too many questions remain unanswered.
There is a general human flaw that this story highlights, though, which is that the majority will assign blame based on too few facts. This comes up time and time again, both here and elsewhere. Take, for example, "Making A Murderer" -- how many petitioned to have Stephen released based on 10 hours of a TV show that really only showed one side of the story? There were more than 200 hours of testimony to try to show all of the available facts for the jury to make a decision.
This is a similar case -- one reddit post describing things from the point of view of a developer who was on his first day ever working. There's tonnes of missing information here.
Still, an interesting scenario to ponder as a thought experiment.
Re: (Score:2)
Re: (Score:3)
No _competent_ DBA is not the same as no DBA...
Re: How the fuck (Score:5, Informative)
I read the thread over on reddit and basically he turned up, was given his dev machine, and then given the instructions for creating the dev environment + db's & etc on his machine.
Then after a while he was like, "OK, let's see, step 17: Enter these commands to create your dev DB."
So he copied and pasted the commands from the document, like he'd been doing for the last 16 steps.
Unfortunately, the commands had the production db's connection in them. And they also had queries like, "drop database mainDB; create database mainDB; create table etc etc"
Whoopsie!
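A cheap defence against exactly this failure mode is to make the dev-setup step refuse to talk to anything that isn't on an explicit dev allow-list, and to keep credentials out of the onboarding document entirely. A minimal sketch, assuming PostgreSQL and psql; the host names, file name, and user below are hypothetical, not details from the Reddit post:

```python
#!/usr/bin/env python3
"""Refuse to run a dev-setup SQL file against anything but known dev hosts."""
import subprocess
import sys

DEV_HOSTS = {"localhost", "127.0.0.1", "dev-db.internal"}  # assumed allow-list
SETUP_SQL = "create_dev_db.sql"  # the "step 17"-style script from the onboarding doc


def run_setup(host: str, user: str, dbname: str) -> None:
    if host not in DEV_HOSTS:
        sys.exit(f"Refusing to run {SETUP_SQL} against non-dev host {host!r}")
    # psql picks up the password from ~/.pgpass or PGPASSWORD, so no credentials
    # ever need to appear in the onboarding document itself.
    subprocess.run(
        ["psql", "-h", host, "-U", user, "-d", dbname,
         "-v", "ON_ERROR_STOP=1", "-f", SETUP_SQL],
        check=True,
    )


if __name__ == "__main__":
    run_setup("localhost", "dev_user", "postgres")
```

With something like this, copy-pasting the wrong connection details gets you an error message instead of an empty production database.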
Re: How the fuck (Score:3)
How the heck did the copy/paste text include valid credentials to even access the production database?!?
Re: How the fuck (Score:4, Interesting)
I'll tell you how. (names below are fictional)
Netadmin or Sysadmin team has no content writer, so Prakash is told to create an induction manual for new hires. Prakash hates this assignment (understandably, he's a net admin, not a fucking content writer) so he does the bare minimum: takes screenshots of prod environments and enters credentials for that generic admin account everyone was using (because fuck processes) as an example.
New hire comes, proceeds to go through the document, and with his lacking attention to detail (or because he's overzealous, or has been told to beat the best new-hire time of 30 minutes 15 seconds) doesn't pay attention to the step saying "enter YOUR given connection details" and copy/pastes the ones shown as an example, which incidentally are absolutely valid and point to the PROD DB.
Disaster happens.
Moral of the story?
1. Get a proper content writer to create documentation: responsibility falls to management to ask for it and senior management to approve it. Blame falls on whoever didn't ask or didn't approve it.
2. Review the documentation: responsibility falls to line manager Prakash is under. Blame that line manager and hang him (figuratively) from a tall sturdy branch.
3. Publish the document as a controlled document in a knowledge management environment. If such an environment doesn't exist, blame everyone who didn't ask for it or approve it.
The CTO is hardly to blame, it's not his business to handle such processes, that should fall under a line manager or whichever dedicated person was supposed to handle it.
I personally wouldn't have kept a netadmin/sysadmin who can't follow basic instructions or a manager who didn't review the training document. Everyone else is off the hook because they were either not aware of the risk or did what they could with a task that wasn't in their job description.
Re: How the fuck (Score:5, Insightful)
But it is his job to ensure appropriate security and backups for the production database.
Re: (Score:3)
The fact that the behavior you're presenting seems normal to you scares me, because it shows how widely accepted this has become.
While I agree that the admin should start writing the draft, their job would end there. Technical people can rarely produce proper documentation, and whether the documentation is internal or customer-facing is irrelevant. Standards should be the same. The mindset you're presenting is similar to that of people writing in bad English online because "fuck it, it's not important, I ca
"Their backups weren't working." (Score:5, Insightful)
Okay, the guy fucked up ROYALLY.
It happens. And he SHOULD get in a bit of trouble for it. That's how you learn "don't do that". I don't think they deserve to lose their job though.
The CTO and all the people in charge of the backups need to be on the street YESTERDAY though. That the dev COULD do something like this is a major fuckup on their part. They simply didn't have their production system locked down properly.
The fact that their backup system was non-functional is double-plus unforgiveable. The dev is merely the highlight for their massive cluster-fuck of a setup.
Re:"Their backups weren't working." (Score:5, Informative)
Okay, the guy fucked up ROYALLY.
I don't think he did. I actually RTFA this time, and the guy was following the onboarding directions he was given. Where it went south was that he copied-and-pasted the wrong database credentials. He was supposed to use the username and password that a command had spit out, but he instead used the ones from the onboarding docs.
I'll pause for a moment to let that sink in.
Some jackass had put actual prod root creds in the onboarding docs, then gave them to a new graduate fresh on his first day of his first job, then walked away while he onboarded himself without supervision.
This poor kid did absolutely nothing wrong except misreading some instructions. The engineering team responsible for the chain of events that led to this colossal fuck are completely and wholly to blame.
Re: (Score:2)
He's a new guy. And the onus is on him for attention to detail.
However, he's a new guy. Fuckups are to be expected.
I've had fuckups at new jobs myself. This is how we LEARN.
And yes, in that environment he was set up to fail.
Re:"Their backups weren't working." (Score:5, Insightful)
But his mistake was easy to make and should have resulted in an "access denied" error message. If you give a five year old a hammer and say "don't whack shit with this", and they whack shit with it, you're the one done goofed.
Re: (Score:2)
How could someone in this position recover from it?
Suing the company for putting you in a potentially career-ending position through no fault of your own might generate a one-off lump sum, but will be expensive and risky and likely make other companies not want to employ you in future.
Doing nothing and hoping they don't sue might be the best thing. Just keep quiet, never mention that you ever worked there in your job history and try to move on. If it brings the company down or they do sue and your name becom
Re: (Score:2)
You raise a lot of perfectly valid points that I'd expect the storyteller to have caught and reacted to. That is, if it weren't their first job out of school.
I'm senior, but I've taken jobs at places where I was given a few pages of setup instructions that could've been a shell script, and I had to work through the steps one at a time. I didn't know how their environments were arranged, or what systems called "cat-kicker" were supposed to do, or why on Earth their git repos were carved up like they were, bu
Re:"Their backups weren't working." (Score:5, Interesting)
The fact that their backup system was non-functional is double-plus unforgiveable.
In my experience, continuous SAN replication is often to blame for a poor backup strategy. It creates the illusion of security - yes, your DR site is synchronized with production within seconds or milliseconds, but guess what, mistakes are also replicated.
Replication -> floods, fire and similar disasters
Backups -> oops my bad
Both are needed.
Re: (Score:2)
That's the modern version of "we have RAID backup!"
et tu, RAID (Score:2)
RAID (especially those with parity) can be terrifying. Just think of it: you have a group of disks probably acquired at the same time and probably coming from the same vendor (or even same production batch) serving the same workload in the same environment. That implies a fairly similar MTTF for all the disks.
Then one of the disks fail; this causes the other disks in the array to first handle a higher load, then to be brutally impacted by the rebuild process. That's like playing Russian roulette with a gatt
Re: (Score:2)
RAID (especially those with parity) can be terrifying. Just think of it: you have a group of disks probably acquired at the same time and probably coming from the same vendor (or even same production batch) serving the same workload in the same environment. That implies a fairly similar MTTF for all the disks.
Then one of the disks fail; this causes the other disks in the array to first handle a higher load, then to be brutally impacted by the rebuild process. That's like playing Russian roulette with a gattling gun.
Yeah. I had a RAID-5 at home. When one disk started failing I didn't notice because the system kept running, and being a home setup it didn't actually have any lights or warnings unless I manually opened the RAID manager and checked. I noticed when a second disk failed, this time completely, for all sectors; during the recovery two more disks started failing.
Re: (Score:2)
About 15 years ago I was looking after an old Compaq ProLiant server that had a six-disk SCSI RAID array in some configuration I can't recall.
Oh, a drive just failed with an "Exceeded power on hours" error? Well, that's ok, there's a hot spare in the array, no problem.
Next day, two other disks went offline, because they were all powered up at the same time when they were new, weren't they? And they were perfectly usable, it's just that the array controller noticed that their smart attributes had exceeded a thr
Re: (Score:2)
Yep. I have a hot standby for our production database server. Everything done on the primary is almost instantly duplicated to the standby. That would include fucking shit up, e.g. drop table foo; Which is why I make a weekly backup to disk and keep the entire WAL history until the next full backup. The backup and WALs are kept on a filer, which is also replicated to another filer.
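For the curious, the "weekly backup to disk plus WAL history" half of that scheme might look roughly like the sketch below, assuming PostgreSQL with pg_basebackup on the PATH and WAL archiving already configured separately via archive_command; the host name, replication user, and filer path are made up:

```python
#!/usr/bin/env python3
"""Weekly PostgreSQL base backup, written to a filer mount (hypothetical paths/hosts)."""
import datetime
import subprocess
from pathlib import Path

PRIMARY_HOST = "db-primary.internal"          # assumed primary
FILER_ROOT = Path("/mnt/filer/pg-backups")    # assumed filer mount


def weekly_base_backup() -> Path:
    stamp = datetime.date.today().isoformat()
    target = FILER_ROOT / f"base-{stamp}"
    target.mkdir(parents=True, exist_ok=False)
    # -Ft -z: compressed tar format; -X fetch: include the WAL needed to make
    # this base backup consistent on its own.
    subprocess.run(
        ["pg_basebackup", "-h", PRIMARY_HOST, "-U", "replication_user",
         "-D", str(target), "-Ft", "-z", "-X", "fetch"],
        check=True,
    )
    return target


if __name__ == "__main__":
    print(f"Base backup written to {weekly_base_backup()}")
```

Recovering to a point just before a "drop table foo;" is then a matter of restoring the latest base backup and replaying the archived WAL up to a recovery target time.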
Re: (Score:2)
I think the company is hopelessly lost. It's not only the CTO that screwed up but the CEO who hired him. At that point, you run out of people to fire, and the company just goes out of business.
He should thank his lucky stars that he found out so quickly what a poorly run company had hired him.
Re: (Score:2)
The entire company should be fired and cease to exist. You should also be fired, chazz.
Thanks! Now I can go home and catch up on my sleep! Later!
*Burnout noises*
Nothing new (Score:2)
If backups are not working and this is known... (Score:5, Insightful)
Re: (Score:2)
Yeah, they were doomed to fail without backups.
What if their server failed irreparably? What if some code went rogue and overwrote it? What if the server burned down because someone somewhere in the building left the stove on?
The CTO should be fired for total incompetence. You can have read-write access to database servers without having access to schema changes on the database. Personally, even though one of my creds actually gives me this access to some of our production databases, I never do it myself and
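That read-write-without-schema-changes split is easy to express in, for example, PostgreSQL. A minimal sketch, with the role, schema, database, and host names all being assumptions (and the password obviously a placeholder); DDL such as DROP TABLE still requires table ownership or superuser, which this role never gets:

```python
#!/usr/bin/env python3
"""Create an app/dev role with row-level read-write but no schema control (PostgreSQL)."""
import subprocess

GRANTS = """
CREATE ROLE app_rw LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE maindb TO app_rw;
GRANT USAGE ON SCHEMA public TO app_rw;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_rw;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_rw;
"""

# Run the grants as a privileged account; the resulting app_rw role can read and
# write rows but cannot drop or alter tables it does not own.
subprocess.run(
    ["psql", "-h", "db-primary.internal", "-U", "postgres", "-d", "maindb",
     "-v", "ON_ERROR_STOP=1", "-c", GRANTS],
    check=True,
)
```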
Re: (Score:2)
Yes, what he did is inexcusable
Following the onboarding process on a sheet of paper that was given to him, which some numbnuts decided should have the address and administrator credentials to the production database on it, on his first day, unsupervised?
What this guy did is 100% excusable. The CTO and whoever created the onboarding process should see themselves out.
I mean fuck I was on a visitors badge fully escorted for a whole week at my first job. That's right, they wouldn't even let me simply walk around the building unsupervised to
Allowing it to happen is wrong to begin with (Score:2)
Allowing a junior developer fresh out of college to log into production with privilege that makes even a minor c
Re: (Score:2)
So much this. It's a major PITA requiring permission from a director for me to get access to a machine that can access a production database. I am perfectly fine with this arrangement!
Re: (Score:2)
One thing that has NEVER changed is that developers are NOT allowed to touch production... and I mean not allowed to even log into any host at all...They don't even have a damn clue where the machines physically sit.
In my office, only one developer has limited production access, the senior guy. He's the only developer who can do code releases and he has RW access to the DB, but he can't mess with the system's configuration. If he's out of the office, I have to do the code releases as the senior system administrator.
Some Company Information Please (Score:5, Interesting)
I'm surprised that the firm has not been named - while I would think that any company that had this happen to them would want to keep this confidential, I would think that somebody would talk about it separately. I suspect that the "company" is some podunk startup in which the CTO is the CEO, CFO, head of development and probably the HR head and they've just hired a developer without thinking about access restrictions (or verifying that backups are actually happening).
Some more information would help clarify these questions and maybe better explain how such a situation could happen.
Re:Some Company Information Please (Score:4, Informative)
Re: (Score:2)
Sounds like your run-of-the-mill startup that has "the next great thing" and in the process got more investor money to burn than is good for them, so starts hiring more people than they can properly manage.
Sounds like a poorly run IT system (Score:2)
Seriously, a day 1 dev has direct production access? Hell, any dev has direct production access? No QA, no release management, no integration or functional test suite if they're doing some sort of continuous deployment?
It's a pain in the ass, but if they've got any sort of actual real database, they'll have had a real database admin, running it with archive logs they can use to restore their data? ... plus their backups are gone?
What sort of fly-by-night operation is this?
Re: (Score:2)
I agree with that - different mindset. Devs want to get stuff done ASAP and usually don't seem to get the concept of multiuser systems. Even very experienced devs do shit like reboot servers in working hours leaving dozens of staff twiddling their thumbs and unable to work unless months of effort has been put into changing their attitude to production.
We need to have medals to give out. (Score:5, Funny)
I say if he succeeded in putting that company out of business, then he should get a medal for sacrificing himself to destroy the company.
My belief is that when he saw, on his first day, the badly written docs they handed him, with a printed (!) account/password having RW access, he instinctively threw himself on that grenade by destroying their production database. Only the most cowardly IT worker would have done otherwise.
Thank you, selfless IT worker, for saving us from the horror of whatever product they were trying to produce.
I accidentally the whole database (Score:2)
be happy (Score:5, Insightful)
You don't want to work at a company where the backups don't work and where a new hire can accidentally delete all their data. Don't beg to stay, instead be happy that you found out quickly how incompetent that company actually is.
Re: (Score:3)
Wish I had a mod point for you, my friend. That is exactly, 100% correct. Rookies make mistakes...sometimes even stupid mistakes. It happens.
If a rookie can wreck a company this badly, it's hardcore proof that the problem is a long, long way up the food chain.
THAT is where heads should roll.
Back in the day.... (Score:3)
Re: (Score:3)
> You know, delete user "gerry" and also delete user's home dir "/usr/gerry".
Who puts a user home folder in /usr/? Isn't that bad practise in the first place? The usr directory is for system stuff.
This was inevitable (Score:2)
I mean, come on:
- No working backup
- Excessive access for the new person
- CTO is incompetent and cannot admit to a mistake
This is an accident waiting to happen. And the new person has zero responsibility for it. Might be better off to be out of that fucked up company though.
Shit happens (Score:2)
Hey, I had a security guy come into our data room to do a drive inventory, and while waiting for me his curiosity got the best of him and he popped open a drive on a 70 TB RAID.
Fake. Made up story. (Score:2)
Multiple levels of fail (Score:2)
Lots of blame to spread around.
The code used in production should have been reviewed by someone before execution in production. No exceptions. Especially because it's a new guy on his first day. The code should have been run in a staging environment first. How long was it known that the backup system was broken? This mistake was obviously not the newbie's fault.
If my production DB backup was hosed, I would be dropping just about everything else to get it healthy again. A deleted database would mean so
Fire the CIO and CISO (Score:2)
If a junior developer can FUBAR the company, these are the two that fucked up royally. Very obviously processes are crap and gross negligence is running rampant.
Fire these two bozos. Out of a cannon.
Always test your disaster recovery plan! (Score:2)
At least know that your backups work.
In the mid-90s I had just started working at some place and the guy I was replacing (he left for a better opportunity) was showing me how everything was set up and as a demonstration he deleted his own account. I guess he felt he didn't need it anymore, but then he says there might be some useful stuff in there and tells me it would be a good exercise for me to learn how to restore from backup.
Not a big deal, it was actually documented and I had done that before at a p
I have seen so many unbacked up production DBs (Score:2)
I wouldn't be surprised to find that a good percentage would be very screwed in the long term.
This is not only their data but the system as a whole. When I am consulting at most companies that are retiring servers I often suggest that t
I did this too (Score:3)
I can relate to this. I wiped out the production database for our ERP system when I was trying to create a copy of it. Fortunately I had good backups and was able to restore the DB with minimal losses, but it took all day (this was back in 1997 and the computer wasn't particularly fast, a SparcServer 1000 with 2 CPUs and 500 MB of RAM). In my case I wasn't fired, and I retired from the job last year after 31 years.
Not His Fault (Score:2)
This kid will have no problem getting a new job. What happened was not his fault in any way, shape, or form. The fault lies squarely with the CTO (in no particular order):
1) He allowed a new hire to have superuser, unsupervised access to the production database without an ounce of training.
2) He allowed an unvetted new-hire script to contain actual superuser credentials that could be used to wipe out the production database with a simple copy and paste error.
3) He allowed a new hire to run that script uns
Re: (Score:2)
By the way, I did something similar on my second day on the job (no, I didn't get fired). I'll skip the details, but yes the management fucked up and had a hard time coming to grips with their fuckup. Fortunately, circumstances in my case were slightly different; but it was close enough to this story that I don't blame the new hire one bit.
Let me guess (Score:3)
The company is named 'British Airways'?
Recent grad given full production access (Score:5, Insightful)
Re:Why Was He Mucking With It In The First Place? (Score:5, Informative)
Whoever gave a new hire a worksheet with a URL for the production server and a login with full access should be the one fired. This new kid showed the company that their new-hire setup is completely insecure and that their backup infrastructure doesn't work. It's not this kid's fault.
Re: (Score:3, Insightful)
Having production databases that can be reached from developers workstations is always a bad idea. They were lucky, in a way, that the developer deleted the data and didn't just alter it slightly.
Firewalls, people, they're not just there to block wordpress exploit scanners. Danger can come from insiders, and that includes hostile employees as well as clueless morons.
Re:Why Was He Mucking With It In The First Place? (Score:4, Funny)
Having production databases that can be reached from developers workstations is always a bad idea.
Welcome to DevOps ;)
Re: (Score:2)
Welcome to startups. It's very good experience straight out of school to know just how screwed up startups are and to avoid them for the rest of your career.
Re: (Score:2)
We've had very different experiences. The startups I've been at have been packed with smart, competent, diligent coworkers. I've had much worse luck with large corporations where employment was effectively for life and it was so easy to blameshift that no one ever got fired for anything.
Re: (Score:2)
Yes, with startups you can (rarely) have brilliant people, but almost always there's little coordination as procedures haven't been set up yet. Plus a constant, never-ending rush to meet short deadlines, with demos needing to be done often and no time to slow down and take stock of what's going on.
Re: (Score:3)
Re: (Score:3)
I've worked as a DevOp quite often in recent years.
As a DevOp I never had access to prod or preprod.
You must have a weird idea of what a DevOp actually does.
Re:Why Was He Mucking With It In The First Place? (Score:4, Funny)
Having production databases that can be reached from developers workstations is always a bad idea. They were lucky, in a way, that the developer deleted the data and didn't just alter it slightly.
You'd also have to pray they did not alter it any further.
Re: (Score:2)
Having production databases that can be reached from developers workstations is always a bad idea.
I don't disagree but sometimes it's impractical with smaller teams especially if that team consists of only one person and they are also expected to provide support for production systems.
It's just VERY important to make sure you're aware of which system you're on.
The worst I ever did was delete all the inventory in an entire aisle in one of our customer's warehouses. Thank God I didn't delete the entire warehouse. I felt really bad as I called up the warehouse manager and told him about it.
He took it sur
Re: (Score:2)
Eventually they split us up and took away the developer's root privileges from all the machines forcing us to use sudo instead...
sudo bash
Problem solved!
Re: (Score:2)
Not to mention - the very premise of just copying the production database for use in testing is already terrible.
In theory, yes. But honestly, I've seen countless situations where there were no realistic alternatives given the available resources, especially if the volume or distribution of data could have an impact on the system. For instance, testing a new address autocomplete feature for an order form; if you only have a few hundred clients, maybe you can generate some decent test data, but if you have millions of customers it can be very difficult to make sure that the feature will work with the real data set. Same
Ya no shit (Score:2)
When we bring someone on, they do NOT get root/admin to critical servers their first day. They have to be off probation first, which is 6 months where I work. Even then, credentials for things are not on a document. That is just asking for them to get lost or stolen. They are given on a wallet sized card, written specially for that person, and they are instructed to keep them safe until memorized.
The reason is, of course, to prevent fuckups, as well as to make sure we trust them fully. The idea of giving so
Re: (Score:2)
Re: (Score:3)
just like the "supposed" rm -rf / last year ...
Something similar happened at Columbia Internet a while ago...
http://ars.userfriendly.org/ca... [userfriendly.org]
Re: (Score:3)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
There was a script in Steam for Linux that did just that and wiped out some home directories when the variable managed to be empty. I didn't hear about it happening to anyone on their first day of work.
Re: (Score:2)
This is what the anonymous poster actually said:
The CTO told me to leave and never come back. He also informed me that apparently legal would need to get involved due to severity of the data loss. I basically offered and pleaded to let me help in someway to redeem my self and i was told that i "completely fucked everything up".
It's not clear that the CTO was threatening legal action against the developer. For all we know, it could simply mean that the company would have to issue some kind of statement to shareholders or to customers based on some contractual commitment.
There's nothing in the entire story that clearly indicates that the CTO or HR acted improperly. Is it normal to have production credentials in documentation handed out to junior developers? Of course not. But the deve
Re: Old News... (Score:5, Insightful)
Retired boss, here. If a junior dev can push to prod AND can delete data, it isn't their fault. Okay... I am still going to try to salvage my hire and see if they fit in QA, or see if the debs will stop fucking with him. Seriously, not his fault. Nobody should be able to push to prod without someone signing off. Ask me how I fucking know. I'm not even a programmer. I just paid a lot of you and shut the hell up and listened.
So, if I am wrong then they were wrong. You don't push code to prod without someone signing off. The person signing off has ultimate authority. You sure as fuck don't let a junior do it, without oversight. Not now, not ever.
If this happened in my shop, some titles would have been changed.
Re: Old News... (Score:5, Interesting)
I gotta stop trying to type on a tablet.
Anyhow, I'd not want to fire anyone, without more information at hand. Firing people has large costs associated with it.
How to describe this?
As stated, I am retired. I've been retired for over ten years. Back in my day, we trained people and paid for them to continue their education. Don't laugh, that really used to happen.
So, I'd probably try to salvage this as a teaching moment. I'd also be changing someone else to the lead position. If I were to fire anyone, it might be the person who enabled this to happen and probably wouldn't be the junior.
Our shop was kinda laid back. It is unlikely that I'd fire anyone, but someone is going to catch a whole ration of shit. If this is a habit, then someone is getting fired; but if it is a habit, it also speaks to a larger problem.
I'd start with the junior dev and work my way up. I'd keep digging until I found the problem in the process. I'd then take time to listen and find out what we need to do to make sure it never happens again. There's gonna be a meeting. Hell, there are going to be several meetings. Chances are, this is going to have a bunch of little teaching moments.
Junior devs, autocorrect hates that word and thinks it should be debs, sure as hell shouldn't be able to push to production. Even if they could, those backups damned well better be timely and functional. The IT staff, I guess you call them ops today, were great at their jobs, thankfully. For reasons, we had a pretty robust backup methodology. I am still flummoxed when I hear stories about not backing up properly.
I haven't done this for a while, so it is time to do it again.
I used a computer because I had to. I actually kinda hated them, for the longest time. I am not a programmer, but I programmed because I had a task that needed to get done. I'm not an admin, but I did that job because it also needed to get done.
Then, I was able to hire capable people. I was able to hire competent developers and IT staff. It was those people who did much of the heavy lifting. I learned a lot from them. I learned what best practices were and why they were that way.
So, as I said, I've not taken the time to do so lately. Here's a tip of the hat, and a nod, to those folks in the admin side and in the developer side. Here's a tip of the hat to the database admins and to the jerks who secured my network. Here is a tip of the hat to the programmers and to the QA. Here's a tip of the hat to those who spent long hours beside me, enabling me and teaching me.
Re: Old News... (Score:3)
I owned the company. We were also pretty relaxed. I'm not sure we'd have used those titles, but I'm pretty sure someone is getting demoted - and it's probably not going to be the junior developer. Hell, I might have taken Mr. Junior Dev out to eat, or given them a small bonus - for having discovered the glaring hole in our process that enabled them to do this and for demonstrating that our backups were worse than useless.
Re: (Score:3)
Re: (Score:2)
If you think hiring a professional is expensive, just wait until you hire an amateur.