Following Mark Zuckerberg's testimony here in D.C. over the last few days has been entertaining, to say the least. A lot of it is just political theater, but there have been moments that make you think this is all the plot of some bad, bizarre science fiction novel.
Take, for example, these three situations where Zuckerberg struggles to give any sort of direct answer (which generally means “I know the answer, but I don’t want to tell you”):
But the part that really made me think of Zuckerberg as more like HAL in 2001: A Space Odyssey was his insistence on Artificial Intelligence saving Facebook and the world. Indeed, there is actually a headline in The Washington Post this morning that says “Zuckerberg says AI will solve Facebook’s problems.”
Which is kind of frightening.
Artificial Intelligence is nothing new. Think of it as an actuarial table on steroids: an AI program gathers as many facts as possible about a particular subject, then, with the help of an algorithm, uses all those past factors to predict a present or future situation. We old-timers call it “experience,” but AI systems don’t sleep, eat French fries, or have senior moments.
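To make that “experience” idea concrete, here’s a deliberately tiny sketch: tally up past outcomes and predict whichever one history has seen most often. The function name and the weather data are made up for illustration; real AI systems are vastly more elaborate, but the basic move is the same.

```python
# Toy version of "experience": count past outcomes, predict the most common one.
# Everything here (function name, data) is illustrative, not any real system.
from collections import Counter

def predict(past_outcomes):
    """Predict the next outcome as the most frequent past outcome."""
    return Counter(past_outcomes).most_common(1)[0][0]

history = ["rain", "sun", "rain", "rain", "sun"]
print(predict(history))  # -> rain
```

An old-timer looking out the window does roughly the same thing, just with French fries in hand.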
In non-subjective areas, I can see how it would be invaluable. But in subjective areas, it has limitations. And yesterday, a Twitter user by the name of Jon Stewart Mill posted a thread (he also links to this article to back up his thoughts) showing just how wrong all of it could go:
So what he’s saying is that if you analyze all the facts and they don’t say what you want them to say, some might change the formula until they do. Is that possible? Certainly. Could the AI Zuckerberg develops have a particular bias, so that a preferred message shows up on everyone’s timelines and ones he disagrees with don’t? I don’t think there’s any doubt about that. Could other media outlets use the same program if it were offered? Who knows?
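Here’s a toy illustration of that “change the formula until it does” point. Two hypothetical stories, with made-up click and share counts, get scored under two sets of hand-tuned weights. The data never changes; only the formula does, and the story that surfaces flips.

```python
# Same data, two formulas, two different "winners".
# All story names, numbers, and weights are invented for illustration.

def rank_score(clicks, shares, weight_clicks, weight_shares):
    """Score a story as a weighted sum of its engagement numbers."""
    return clicks * weight_clicks + shares * weight_shares

story_a = {"clicks": 100, "shares": 10}
story_b = {"clicks": 60, "shares": 40}

# Formula 1: clicks matter most -> story A wins (105.0 vs 80.0)
a1 = rank_score(story_a["clicks"], story_a["shares"], 1.0, 0.5)
b1 = rank_score(story_b["clicks"], story_b["shares"], 1.0, 0.5)

# Formula 2: shares matter most -> story B wins (40.0 vs 92.0)
a2 = rank_score(story_a["clicks"], story_a["shares"], 0.2, 2.0)
b2 = rank_score(story_b["clicks"], story_b["shares"], 0.2, 2.0)

print(a1, b1, a2, b2)
```

Nothing about the underlying facts changed between the two runs; whoever picks the weights picks the winner.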
All of which makes me sad. I was a journalist for a decade out of college and still strongly believe in the profession. I’m not a fan of The Washington Post slowly abandoning what used to be impartial journalism in favor of advocacy for a particular school of thought, but I am still a 7-day-a-week subscriber. The Loudoun Times-Mirror is delivered for free into my driveway every week, but if they charged, I’d pay for that too. Society needs newspapers and journalism that investigate stories that are swept under the rug, and whether it’s online or a dead-tree product, it needs to be supported.
But if publications or products try to control the message too much, it won’t change people’s minds; it will merely make many doubt the institution. Then, when an editorial product does uncover something totally unbelievable, we’ll have the worst of all worlds: the truth was revealed, but nobody believed it.
So, Mr. Zuckerberg, put that AI product on the back burner. Focus instead on regaining your company’s credibility, because there are actually a lot of us who enjoy seeing pictures of our friends and reading about their lives, without having the details of those lives sold on the open market.
Doing that wouldn’t be artificial. It would be a sign of REAL intelligence.