Zero tolerance has become an overused, catch-all phrase that often fails to stop legitimately bad behavior. That’s because most of the time there is an inherent need for a sliding scale, or some grey area that must be addressed and weighed before a sensible judgment or penalty can be set.
No matter how much we want to form the argument around “zero,” it just never seems to work out that way. However, unethical business practices, I believe, are one of the few issues that can withstand the test and scrutiny of zero tolerance.
Again, what allows this issue to get so murky, in business culture as well as most others, is the illusion or belief people concoct when they assume an unethical behavior or practice can somehow sit on a sliding scale. It can’t.
Any unethical act is just that – unethical.
The magnitude of the behavior may be a legitimate argument in defining punishment. e.g., Knowing a supermarket scanner is overcharging a certain canned good by 1 penny, but that it can’t be fixed till after hours, so nobody says anything in hopes no one will notice.
Then there’s the scanner set to deliberately overcharge every can of a certain label by 1 penny. Both are unethical. Period. And both should be dealt with accordingly. The dollar amount has nothing to do with it. The only amount up for discussion is the size of the fine, or the severity of the punishment (i.e., are they going to jail).
It’s the willful knowing and allowing of the act where the rubber hits the road. Again: not the dollar amount. Far too many allow dollar amounts or the number of people affected to be the signal. No: it’s the act that is the signal.
So when I read about the latest brouhaha surrounding Facebook™ (FB) and their alleged “psych experiment” on unsuspecting users, I was a little taken aback not just by the defensive posture of Facebook’s management, but also by many of those reporting on the story. One would think this was something inconsequential. I cannot disagree more, nor do I believe this issue is over.
Here’s the difference, in my view, with this whole episode. I know I’m an outlier in my thinking, but someone has to be the adult here.
What I know from what has been reported is that Facebook intentionally altered some users’/customers’ data streams in a way designed to determine whether the manipulated feed made them feel better or worse. They then evaluated that data so they could use what they learned even more to their liking. And “use” does not rule out possibly selling it to other big-data buyers, in my book.
And here’s the kicker: all without the users’ knowledge or consent. I’m sorry, but you don’t get to psychologically manipulate targeted and selected people for your corporate benefit without consent. Period. Especially if the intent is to produce mood swings in any shape or form.
Why is this so? Because it’s not only unethical – it’s dangerous to the individuals involved. And besides, where is the evidence of how many trained psychologists looked over this data and judged the findings good, bad, or inconsequential? Or were these determinations of mood made by plain ole management? If so, I take issue there as well, which pushes my outrage meter even higher.
Remember the old stories about subliminal advertising being used on theater patrons years ago? That was when a text or picture of popcorn and such could be inserted within the featured film. The patrons didn’t understand why they had a craving, but they just did.
The image flashed by faster than an eye blink, so the eye never consciously registered it, yet the brain caught it, and that was what caused the craving. What was at issue? Not the actual imagery or process, but the customers not knowing they were being subjected to it. That is the ethical difference.
Some may equate this with pumping a room full of the smell of fresh popcorn to entice customers. It’s not the same. A person can understand what’s happening and may choose not to be swayed. It’s in the ability to know where that ethical bar lies.
So again, why is this Facebook incident such a line-crossing event? Easy: what if the data feeds, or whatever it was that FB manipulated, were fed to people who might have been “on the edge” in dealing with a personal crisis or other?
All I’ve seen in the reporting of this, and in the supplied responses, is the term “user.” The user this, the user that, as if the so-called “user” were some inanimate object.
What if one of these “users” was a 15-year-old school kid parading as an adult with faked credentials? What if this “user” had psychological issues for which hard-core psychiatric drugs such as Prozac™ or others were prescribed and being taken? What if that user was suffering from episodes of bipolar disorder and that manipulated news or data feed was just enough to send them over the edge into doing something horrible?
What if it were a returning veteran, or a firefighter, or a cop, suffering a bout of PTSD or _________ (fill in the blank)? Do we know? Has it been revealed who these exact people were?
These people are not just test subjects to be dealt with, their results discarded or discounted, as if applying the moniker of “users” makes it all better or easier for a company or persons to handle, while not appreciating the gravity of the potentially harmful manipulations perpetrated on these unsuspecting people.
And if I were someone with the legal intellectual prowess of Alan Dershowitz, I would be drawing up a case to find out exactly who was affected and to what detriment, if any. Period.
We’ve lost a lot of words to the point where the meaning of “is” is now questioned, and more. However, as I know it today: unethical is still just that – unethical.
I don’t believe for one second this incident is anything close to being over, some rear-view-mirror event. Personally, I believe it may be just getting started, for it takes a little more time than most other issues to work through, because ethics in business has been so adulterated, by so many, that you get a little numb.
As for the likes of Facebook and others who stand with their thumbs up to show the world just how smart they are, and just how much they can now do with their algorithmic manipulations, I’m reminded of that great scene in Jurassic Park (1993, Universal Pictures) between John Hammond (played by Richard Attenborough) and Dr. Ian Malcolm (played by Jeff Goldblum):
John Hammond: I don’t think you’re giving us our due credit. Our scientists have done things which nobody’s ever done before…
Dr. Ian Malcolm: Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.
It didn’t end well for the “coders” there either, if I remember correctly. But don’t worry, they’ve still got “Yellen Capital Management.” So who needs ethics?
© 2014 Mark St.Cyr