Humans as data gatherers for AI (General)

by dan, Friday, April 21, 2023, 11:45 (374 days ago) @ dulan drift

I've been following this explosion in media attention to AI, and I'd say the attention is warranted. It stems from an exploding level and variety of access that we normal folks have to AI tools like ChatGPT, not unlike how access to the Internet, and then the WWW, rolled out.

Soon, and I'd say faster than it took the Internet to invade our homes, this will be truly ubiquitous. Of course, in a sense it already is; I mean that people will be consciously using it on a daily basis very soon, surely within a year or two. It took the Internet nearly a decade to become a household tool.

I've been playing around with ChatGPT, Google's Bard, and a few other tools over the last week or so, and this is indeed a very big deal. You are absolutely correct that it is feeding on our data; we are feeding it not only without our knowledge, but also without our permission.

And at what point did the whole question of copyright and plagiarism simply become a non-issue? If I ask ChatGPT to write a 1,000-word essay on the history of baseball, it is plagiarizing; it's that simple. It's not creating anything. Odd how, if you or I were to produce that exact same essay word for word, we'd get called out for not citing sources. Not an issue with AI!

The WSJ posted a story that included 25 ethical questions regarding AI; they were standard, but good. It's behind a paywall, but I was able to access it at work. A couple of the questions relate to the scenario you present, so I'll post them below. To extrapolate: if AI is better than humans at completing tasks related to safety and human life, should it be required to complete those tasks instead of us (e.g., driving, surgery)?

It's not an unlikely situation. We're required to wear seatbelts, after all, because they save lives. The same logic could be used to require AI to drive our cars and to forbid us from driving them ourselves.

Now, let's extrapolate further (and this gets to the morals question): if AI is better at judging which political candidate should be elected, could it someday be given that task?

You can see how this is a slippery slope.

[images: the WSJ's ethics questions mentioned above]

