Facebook recently stirred controversy after reports emerged that it had manipulated user timelines to study the effects of people receiving positive or negative content in their feeds. The GPS company TomTom stepped in a puddle when its traffic data ended up in the hands of police looking for speeders. Target wrongly presumed certain female consumers were pregnant after a failed attempt at predictive modeling. And before an update, Siri would direct you to the nearest bridge if you asked about jumping off one.

All of these instances involve morally questionable uses of data, in which people’s privacy was violated or conclusions were drawn that led to invasive or poor decisions and involvement on the company’s behalf. And yet in each of these instances, the genesis of the idea, and what could be accomplished through data analysis, was probably first seen as “cool.” Did anyone in the room question up front whether it was creepy?

“In the absence of an ethical framework in talking about business decisions, we revert back to our moral code,” says Kord Davis, author of the book “Ethics of Big Data.” This is where we are with Big Data: stuck in a lawless Wild West in which the technology is ethically neutral but everything that’s done with it is volatile.