Let’s take an example.
We know that searching stuff on Google got worse, but imagine if AI replaced it completely. Searching the web would become something like prompting a chatbot, a complete black box of information. AI could make sure that you don’t get conflicting views on state policies or access to copyrighted materials…
Yes, you’re right about restricted content from Google and other search companies; but the point I was trying to make is that if we rely on AI as a source of information, it will become more and more difficult to reach the primary source of that information.
There’s another side to that too: AI can “poison the well”, that is, generate misinformation around the clock and spread it across the web until searching becomes impractical, and then the AI can be sold as the answer to that problem.
I mean, companies are putting a ton of money into this AI hype; it’s almost “too big to fail”. These same companies will start degrading our current infrastructure and creating problems so that they can sell the solution.
I take your point. It’s just that every scenario you’re describing with so-called AI could already be done with a search engine. The slop of yesteryear was SEO-ranked articles and fake links built to make the algorithm prioritize your site over others. Well-poisoning is how PR agencies get troublesome celebs out of the headlines again. The list goes on.
I share your concerns about the black-boxed nature of so-called AI and, by extension, the search engines built on it. I’m not saying it isn’t a problem; it’s just not a new one. Up until now we have had companies in charge with a vested interest in not bending the flow of information too far from, let’s call it, the median truth. Now companies are letting models make those decisions, and some humans afford these models more credibility than their own common sense, which is all worrying to say the least. So I’m as worried as you are; it just started earlier for me.