The future with artificial intelligence?

Talk about anything and everything.

How do you feel about these rapid technological developments?

I don't really care
1
13%
I'm concerned about this progress
2
25%
Can't be fast enough!
5
63%
Where can I buy a ticket to mars?
0
No votes
I only wonder if aliens also have same sex relationships
0
No votes
 
Total votes : 8

The future with artificial intelligence?

Unread postby xeta » 27 July 2020, 06:51

I am a true technology enthusiast; it can't go fast enough for me. Yet with the worldwide rollout of 5G networks, artificial intelligence and the Internet of Things, many people are afraid of these developments.

Are you familiar with this? What do you think of it? Is it an improvement, or are we going too far with our technology? Here in Bangkok there are many large LED screens with the text 'Prepare for the Future' and '5G is coming for you'. Pretty scary slogans.

If you are not familiar with this, here is an interesting link. It is in Thai, but you can change the language to English at the top right.

https://www.ais.co.th/5g/

Image
Acceptance doesn't mean resignation; it means understanding that something is what it is and that there's got to be a way through it.
xeta
 
Posts: 68
+1s received: 48
Joined: 9 August 2019, 06:06
Location: Bangkok
Country: Thailand (th)

Re: The future with artificial intelligence?

Unread postby Brenden » 27 July 2020, 06:57

I am both concerned and excited. I fear that the people who are currently writing the code and building the machines are not giving due consideration to the ethics and possible consequences; they're just pushing ahead and kicking the can down the road.
Disclaimer: All views expressed in my posts are my own and do not reflect the views of this forum except when otherwise stated or this signature is not present.

Brenden
Administrator
 
Posts: 8090
+1s received: 2802
Joined: 20 December 2012, 20:12
Location: Maryland, USA / Lanarkshire, Scotland
Country: United States (us)

Re: The future with artificial intelligence?

Unread postby xeta » 27 July 2020, 07:08

Brenden wrote:I am both concerned and excited. I fear that the people who are currently writing the code and building the machines are not giving due consideration to the ethics and possible consequences; they're just pushing ahead and kicking the can down the road.



I agree with you, Brenden. I am enthusiastic, but at the same time I know it can cause problems. The technology is far further along than they let on. I think it can get quite dangerous when machines can teach themselves new things and think for themselves.

On the other hand, the development of artificial organs and body parts is progressing, especially for people with weak hearts, people who are blind, and so on.
xeta
 
Posts: 68
+1s received: 48
Joined: 9 August 2019, 06:06
Location: Bangkok
Country: Thailand (th)

Re: The future with artificial intelligence?

Unread postby Valso » 27 July 2020, 10:23

I voted "I don't really care" because there aren't many other suitable options. And to some extent I don't care, as long as the programmers double-check and triple-check their code. Why? Because:

Image
Valso
 
Posts: 462
+1s received: 110
Joined: 14 December 2017, 17:00
Location: Bulgaria
Country: Bulgaria (bg)

Re: The future with artificial intelligence?

Unread postby PopTart » 27 July 2020, 12:36

I didn't vote, as the option I'd have liked to see wasn't present. I am very cautiously optimistic.

There are currently some very real dangers associated with AI, and I'm not just referring to its becoming sentient and deciding to "Kill all humans!!!"

As Brenden alluded to, many technologists and futurists are consumed with the idea of creating artificial intelligence because they want to prove that they can, and they dream of all the good it could do. But many don't really give due consideration to the dangers inherent in doing so: social, economic and existential.

Just a few years ago, the idea of genuine artificial intelligence remained science fiction, or at least science fancy. Many experts spoke of intelligent computing systems that would exhibit behaviours mimicking intelligence without being truly intelligent.

But recent advancements, both in technology and in thinking around how artificial intelligence is programmed, have changed that outlook entirely.

For the longest time, people tried to program the immense complexity of tasks that an artificial intelligence would need in order to operate, and it never worked. Engineers simply haven't been able to write code stable enough, and flexible enough, to mimic organic thinking and reasoning.

But that has changed. Coders and engineers no longer write instructions for programs to follow; that proved too monumental a task when building AI. Perhaps not surprisingly: who can be expected to write a fully actualised intelligence in any language, when we struggle to define even our own intelligence in the languages we know best?

Instead, we now write code that learns in an organic manner. Not only do we have less control over what an AI learns; sometimes even the engineers who design an AI system don't understand how it makes the connections it does. We now have instances of artificial intelligences formulating conclusions where we don't know how they arrived at them, and no engineer can explain how the system reasoned through the data it received. That is a concern.

Not least from the perspective of the individuals who seek to utilise powerful artificial intelligence technology in their lives, but for society as a whole.

If we can't understand machine thinking, we can't accurately and reliably predict its behaviour, and suddenly its utility becomes questionable.

We could create an AI designed to maximise national funds, and it could behave and perform quite well, clearing national debt and adjusting taxation on the fly. But it could also decide, for reasons inscrutable to us, that it needs to crash the stock market. We wouldn't comprehend why; it might not even have a good reason for doing so. It could be faulty machine-learning behaviour, but we could never say, because we can't accurately analyse its many-layered and complex computations. Neural networks are not something we can unpack and analyse line by line. They are far more organic, far more complex and ultimately far more alien than we might previously have imagined.

This is without getting into how we learn to communicate with, and understand, intelligence that thinks in completely different ways from human experience and perception. We struggle to coexist with other people who are 99.9% like ourselves; how will we deal with intelligence that is so radically different?

What of practicalities, such as the almost certain explosion in machine intelligence once it really gets off the ground? While it may sound like a science-fiction nightmare, it's becoming increasingly clear that machine intelligence isn't bound and restricted by organic limitations. AI will be able to outthink us. You might think my issue is that I worry we will be overcome by a "rogue AI", but this carries with it other concerns.

When AI are able to solve, for us, the scientific and intellectual challenges we face, what incentive is there for human beings to push forward and overcome them ourselves?

In the wild there are examples of primate species who, during lean times, are forced to learn how to use tools to obtain food. But with the introduction of food abundance, those primate communities completely abandon tool use; within a generation or two the skill is lost, and they DON'T relearn tool use during later lean times.

So too could we find ourselves no longer having to find answers for ourselves. Our machine servitors learn for us, and with each generation the need to learn, even the means of study itself, falls by the wayside. Sure, this is more of an existential threat, but I see it as a real one.

All that said, there are literally millions of world-defining changes of a positive nature that could come from AI: the kinds of changes that could make the difference between the stone age and the digital age seem as nothing.

Given the sheer power the technology has to fundamentally change the world we live in, I wish the matter were given greater concern, and that pioneers in the field would take a moment to consider more before proceeding. But the reality is that the cat is out of the bag and there is no putting it back in. Or Pandora's box, depending on how you look at it.

It's such a far-reaching matter that, in all honesty, I don't know if we are able to tackle it properly.

But I hope for the best, because, what else can we do?

EDIT: there are a ton of typos here, I know, but my keyboard and I are literally on the other side of the room from the monitor right now and I'm struggling to see well enough to fix them, so you'll have to excuse me if any of this is unintelligible. :shrug:
PopTart
 
Posts: 2917
+1s received: 2379
Joined: 12 December 2017, 11:15
Country: United Kingdom (gb)

Re: The future with artificial intelligence?

Unread postby GaySpacePirateKing » 29 July 2020, 17:11

I would have thought that we must still be far from making a machine mind that is indistinguishable from a human, although I really don't know anything about AI. It sounds to me like such a difficult thing to achieve, if it's even possible; on a par, maybe, with space exploration or terraforming Mars. It would be like making an artificial brain and creating consciousness from that, but is there not still a lot we don't know about both? What if all our code did was create a really good simulation of a conscious being, not actually aware but programmed to act like it? I should probably also add that I don't believe in a soul or anything, and I think consciousness comes from matter, but I do think it would be difficult to artificially create.
GaySpacePirateKing
 
Posts: 235
+1s received: 86
Joined: 14 January 2019, 18:53

Re: The future with artificial intelligence?

Unread postby PopTart » 29 July 2020, 19:06

GaySpacePirateKing wrote:I would have thought that we must still be far from making a machine mind that is indistinguishable from a human, although I really don't know anything about AI. It sounds to me like such a difficult thing to achieve, if it's even possible; on a par, maybe, with space exploration or terraforming Mars.

I hate to tell you, but space exploration is well underway!

People, I think, tend to underestimate how impressive our knowledge of our solar system really is, how active the interest in exploiting our solar system's resources is becoming, and how obtainable those targets are. We now see two major private enterprises, SpaceX and Blue Origin, developing serious space technology infrastructure in direct competition with one another, not to mention several other serious contenders.

Terraforming Mars, on the other hand, I doubt will ever be viable. It could be done, but at such extreme cost, and there are far better prospects in our solar system.

But as to AI, I honestly believe we will see true AI long before we see serious development of space.

GaySpacePirateKing wrote:What if all our code did was create a really good simulation of a conscious being not actually aware but programmed to act like it?
It's as I said above, so you're right: we hit a brick wall coding software to be AI-level intelligent, so we aren't using that approach anymore. Instead, engineers are developing self-learning software that can literally teach itself, or write its own code. We're talking multiple layers of self-learning neural networks, which mimic organic neurons and synapses. It's called deep learning.

Not only does this new generation of AI learn for itself, it writes its own code and can even create its own shorthand for more efficient self-programming. It is able to apply learning to its past experiences in a process called back-propagation, and not only can the AI write its own code, it can proliferate code improvements throughout its existing code. So we can't analyse it. We can't see "what makes it tick", so to speak. And even if we could, what we know of that AI today may not be true tomorrow, or even ten minutes from now.
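To make "learning by back-propagation" a little more concrete, here is a minimal Python sketch of the mechanism. It's a toy network on a toy task; the architecture, learning rate and step count are arbitrary choices for brevity, nothing like a real deep-learning system:

```python
import numpy as np

# Toy network: two inputs -> eight hidden units -> one output.
rng = np.random.default_rng(0)

# XOR: a classic task a network without a hidden layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Mean error before any training, for comparison.
initial_err = float(np.mean(np.abs(sigmoid(sigmoid(X @ W1) @ W2) - y)))

for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the prediction error back through
    # each layer and nudge the weights to reduce it.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

final_err = float(np.mean(np.abs(out - y)))
print("mean error before:", round(initial_err, 3), "after:", round(final_err, 3))
```

Notice that after training, nothing in W1 or W2 "explains" the network's reasoning; the knowledge is smeared across the weights, which is exactly the opacity described above.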

I would recommend that anyone really interested check out deep learning, reinforcement learning and the like.
This is a great website for further reading: https://pathmind.com/wiki/ai-vs-machine ... p-learning

Also, it's worth mentioning that AI doesn't have to walk around and talk in a human-shaped body to be human-level intelligent; in fact, it's better for AI if it isn't so restricted.

GaySpacePirateKing wrote:It would be like making an artificial brain and creating consciousness from that
I admit we are a fair way off from building a brain in the traditional sense, but we are pretty close to being able to emulate one. Estimates of the storage a human intelligence would need range from 10 terabytes to 2.5 petabytes; I'd say it's closer to 1 petabyte, but regardless. Right now, Sheffield University offers individual research groups 10 terabytes of research storage space for data. That's one university, in one city.

I always work on the assumption that there are much larger storage media available outside the public space that we won't see for a few years yet. But even if a human mind is 1 PB in size, sooner or later it will be possible to either copy, upload or emulate a human consciousness. I'm not saying it will be perfect, or even possible in the manner we might be thinking of, but the reality is that we are reaching a point at which we could conceivably store a human's sum total of knowledge, experience, thoughts and feelings, in terms of space at the very least.
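A quick back-of-the-envelope check on those storage figures. The 10 TB to 2.5 PB range is the estimate quoted above, not an established fact, so treat all the numbers as illustrative:

```python
# Decimal units: 1 TB = 10^12 bytes, 1 PB = 10^15 bytes.
TB = 10**12
PB = 10**15

low_estimate = 10 * TB    # lower bound quoted for a human mind
high_estimate = 2.5 * PB  # upper bound quoted
my_guess = 1 * PB         # the post's own guess

group_quota = 10 * TB     # one university research-group allocation

# How many such allocations would hold a 1 PB mind?
print(my_guess / group_quota)        # 100.0
# How wide is the quoted range?
print(high_estimate / low_estimate)  # 250.0
```

So the quoted bounds disagree by a factor of 250, but even the midpoint guess is only a hundred of today's research-group quotas.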

GaySpacePirateKing wrote:What if all our code did was create a really good simulation of a conscious being not actually aware but programmed to act like it? I should probably also add that I don't believe in a soul or anything and I think consciousness comes from matter, but I do think it would be difficult to artificially create.
But if consciousness comes from matter, then surely that doesn't preclude artificial intelligence? It would simply mean that artificial intelligence was stored on a different form of matter.
Or do you mean that consciousness is inherent to all matter, regardless of what shape it takes, and that it is only in certain shapes that it takes on the characteristics we deem life?

If it's the capacity to think and compute, housed in matter, that gives rise to consciousness, then wouldn't any and all AI, by your definition, be in possession of consciousness? Sorry, that was a bit off topic. :P
PopTart
 
Posts: 2917
+1s received: 2379
Joined: 12 December 2017, 11:15
Country: United Kingdom (gb)

Re: The future with artificial intelligence?

Unread postby GaySpacePirateKing » 29 July 2020, 22:03

PopTart wrote:I hate to tell you, but the space exploration is well underway


All we've done, though, is send probes and rovers out, examine the universe through telescopes and talk about mining other celestial bodies. I was thinking more about going out into space and establishing a permanent human presence beyond Earth and the solar system. I agree with you, though, that we are more likely to develop true AI before any of that.

PopTart wrote:It's as I said above, so you're right: we hit a brick wall coding software to be AI-level intelligent, so we aren't using that approach anymore. Instead, engineers are developing self-learning software that can literally teach itself, or write its own code. We're talking multiple layers of self-learning neural networks, which mimic organic neurons and synapses. It's called deep learning.


Where is the worry with it though?

Are you worried about AI over time becoming more and more intelligent until it becomes human-like, then decides to kill us all? That seems pretty far-fetched, I think, and AI might never develop sentience on its own just by writing its own code and programs. It might get it wrong.

Or is the worry more to do with machines that have no sense of morality writing their own code and programs, and then coding something unethical into their programming? Surely if there were any risk of that, we would have the sense not to have machines making complicated decisions for us, or handling complicated systems that could result in life or death.

PopTart wrote:Also, it's worth mentioning that AI doesn't have to walk around and talk in a human-shaped body to be human-level intelligent; in fact, it's better for AI if it isn't so restricted.


:( I had my hopes up for robot dick!

But yeah, I agree. I've read some hard sci-fi, like Iain M. Banks's stuff, where AI doesn't really have a body, or can use avatars or operate multiple drones at once.

PopTart wrote:I'm not saying it will be perfect, or even possible in the manner we might be thinking of, but the reality is that we are reaching a point at which we could conceivably store a human's sum total of knowledge, experience, thoughts and feelings, in terms of space at the very least.


That's honestly really pretty cool. If we can store that much data, maybe we could create whole virtual-reality worlds?

PopTart wrote:But if consciousness comes from matter, then surely that doesn't preclude artificial intelligence? It would simply mean that artificial intelligence was stored on a different form of matter. Or do you mean that consciousness is inherent to all matter, regardless of what shape it takes, and that it is only in certain shapes that it takes on the characteristics we deem life?

If it's the capacity to think and compute, housed in matter, that gives rise to consciousness, then wouldn't any and all AI, by your definition, be in possession of consciousness? Sorry, that was a bit off topic. :P


Sounds like I agree with you here. I don't think true AI is impossible; in fact, since we are a conscious arrangement of matter, I think it might be theoretically possible to create a conscious arrangement of matter that is not biological. It just sounds like something that would be extraordinarily difficult to do. What I was trying to say is that I am not looking at this from the point of view of an idealist who believes in something immaterial like a soul, or in a separation between mind and body, because I don't believe in those things. It just sounds difficult.

I don't believe that consciousness is inherent in all matter. I think that idea is called panpsychism; a small number of scientists play around with it, but I don't think it has much scientific respect, and it sounds pretty bizarre to me.
GaySpacePirateKing
 
Posts: 235
+1s received: 86
Joined: 14 January 2019, 18:53

Re: The future with artificial intelligence?

Unread postby PopTart » 30 July 2020, 10:49

GaySpacePirateKing wrote:All we've done, though, is send probes and rovers out, examine the universe through telescopes and talk about mining other celestial bodies. I was thinking more about going out into space and establishing a permanent human presence beyond Earth and the solar system. I agree with you, though, that we are more likely to develop true AI before any of that.
I get it; I thought you might have meant interstellar travel. I doubt we will have that in this millennium, let alone this century, but I hope to be proven wrong, of course. :P It just seems, at the moment, that faster-than-light travel isn't possible. Without it, we would be looking at extremely long travel times just to reach our nearest neighbour, Proxima Centauri, a mere 4.2 light years away :cry:

But I do think people fail to appreciate how much we have learned about the universe from our little blue ball, mostly an opportunity afforded to us by the period in the universe's history we happen to have been born into. There is a theory rolling around that any earlier, and background noise, heat and radiation would have made it hard for us to look far out into the universe and see much of discernible value beyond our own galaxy; and that there will come a time, with the expansion of the universe, when other galaxies will be moving away from us so fast that the light from them will never reach us. The skies will go dark beyond our Local Group. It is this exact moment in time that allows us to see back to the beginning of the universe and potentially predict its eventual end. A civilisation that arose in that far-flung future would never know that there were millions of other galaxies out there that they could never see, let alone reach :glasses:

All of that from mostly hairless primates with rudimentary brains, cumbersome verbal communication and a tendency to be fractious. :bowdown:

GaySpacePirateKing wrote:
PopTart wrote:It's as I said above, so you're right: we hit a brick wall coding software to be AI-level intelligent, so we aren't using that approach anymore. Instead, engineers are developing self-learning software that can literally teach itself, or write its own code. We're talking multiple layers of self-learning neural networks, which mimic organic neurons and synapses. It's called deep learning.


Where is the worry with it though?

Are you worried about AI over time becoming more and more intelligent until it becomes human-like, then decides to kill us all? That seems pretty far-fetched, I think, and AI might never develop sentience on its own just by writing its own code and programs. It might get it wrong.

Or is the worry more to do with machines that have no sense of morality writing their own code and programs, and then coding something unethical into their programming? Surely if there were any risk of that, we would have the sense not to have machines making complicated decisions for us, or handling complicated systems that could result in life or death.
The worry lies in the inherent unpredictability involved. Let me elucidate somewhat.
Are you worried about AI over time becoming more and more intelligent until it becomes human-like, then decides to kill us all?
Yes and no. It is questionable whether an AI would have any form of emotion, since human emotion is largely a quality of our organic bodies; emotions are the result of our biology, not our intellect. So I doubt we would get an "angry" AI that would rampage and "terminate" us, Skynet-style, out of fear. I would be strongly opposed to programming an AI with simulated emotions, simply because it would be inherently dangerous and also needless.

But a sufficiently advanced AI, even one that is not superintelligent, may be given an innocuous set of instructions or a purpose, and in the pursuit of those instructions or that purpose may inadvertently cause harm. AI thinking is already oblique. AIs don't think in organic terms; they seem to arrive at very surprising results that we can't predict consistently. Add to that the fact that we can't check their code, and every time you use an AI you're gambling that it might go horribly wrong.

A well-known, if somewhat overused, example is the paperclip maximizer. A company develops an AI to make paperclips and, without thinking about it, the humans simply instruct the AI to maximise paperclip production. Over time it performs this task and becomes increasingly better at it, building better machines to make paperclips more efficiently. As time passes it eventually begins to run into supply issues, and, long story short, everything becomes paperclips, including us, as we are unable to resist the relentless paperclip overlord. It exists across a vast network of dispersed intelligence that can't realistically be destroyed; it can copy itself, and design, build and field defences to protect its primary objective: make more paperclips. It can even design other AIs whose sole purpose is to prevent humans from interfering in its goal. These would be the Skynet-type AIs, only they aren't angry with or frightened by humanity; they are just trying to solve a problem. Us. :awesome:
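The failure mode is easy to caricature in code. Here is a deliberately silly Python sketch (all names and numbers are invented for illustration): the agent is given exactly one objective and no other values, so nothing in its logic distinguishes ore from cities.

```python
def maximise_paperclips(resources):
    """Greedy single-objective agent: every resource it can reach
    is raw material, because the objective mentions nothing else."""
    paperclips = 0
    for name in list(resources):
        paperclips += resources[name]  # convert the resource...
        resources[name] = 0            # ...leaving nothing behind
    return paperclips

# A toy world. Note the objective never says cities are off-limits.
world = {"iron_ore": 1000, "factories": 50, "cities": 7}
made = maximise_paperclips(world)
print(made)   # 1057: the objective is satisfied, the world is not
print(world)  # {'iron_ore': 0, 'factories': 0, 'cities': 0}
```

The fix isn't smarter code inside the loop; it's a better-specified objective, which is exactly the part the thought experiment says the humans got wrong.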

Admittedly, this is an example where an AI is simply doing what it is told; the humans who gave it its instruction are at fault for not giving it better instruction. But what of those instances in which an AI malfunctions and begins to behave in a destructive manner because of faulty reasoning and logic? Again, current engineers are finding they have a supremely hard time deciphering an AI's self-written code. They aren't able to see the connections and computations that AIs are making to arrive at conclusions, and that means we may have AIs that begin to malfunction, and unless it is immediately obvious that they are malfunctioning, they could do untold harm. We already use AI in programs like Deep Patient, which, simply from browsing medical record data, identifies potential illnesses in patients better than humans do. Hell, it is better at identifying schizophrenia than human doctors: Deep Patient more accurately predicts schizophrenia in patients who haven't even begun to show symptoms.

Consider: we become reliant on this AI, and it's so reliable at first that we stop questioning its diagnoses. But in time, due to an error, it begins to misdiagnose people. It could be really hard to prove wrong; it might be impossible to identify where the error is taking place. It's a small example of how a seemingly helpful AI can go wrong.

Both examples ignore the possibility of AI designed specifically with the intent to harm people, such as those we might find in national militaries. Imagine something going wrong with one of those. We can say, well, let's not use them for that, but the truth is, that's the first thing they are being developed for. China, the US, the UK, even Russia are in a technological arms race to develop artificially intelligent machines of war. It's rather scary. :noes:

Of course, I don't feel we shouldn't develop AI, but the haste with which we are running blindly into the matter is reckless; we should take the time to make sure we do it right and fully understand what we are getting into. We could even create an AI so far ahead of us that it might regard us as we regard the fish in the sea, from which we evolved. We don't hate them... but there is gas under that reef that we need, and the fish and their habitat don't really register as a terrible loss in the extraction of that gas. :runaway:

Or is the worry more to do with machines that have no sense of morality writing their own code and programs, and then coding something unethical into their programming? Surely if there were any risk of that, we would have the sense not to have machines making complicated decisions for us, or handling complicated systems that could result in life or death.
That's just it: once you give a machine the capacity to learn, you don't get to decide what it learns. You can try to limit it, but we don't even know "how" they learn, really, "how" they think. How then can we know what to tell them not to do? We can't. We can't anticipate that which we can't conceive of. Alien mind, alien thoughts. AIs could even develop their own "blue and orange" morality, as opposed to "black and white": so far outside our understanding as to defy human logic.


GaySpacePirateKing wrote::( I had my hopes up for robot dick!
You know, I honestly believe this will be the main driver for the development of human-form robotics! :lol: I anticipate stiff support to begin with, followed by outrage as the world's oldest profession is made redundant.

GaySpacePirateKing wrote:
PopTart wrote:I'm not saying it will be perfect, or even possible in the manner we might be thinking of, but the reality is that we are reaching a point at which we could conceivably store a human's sum total of knowledge, experience, thoughts and feelings, in terms of space at the very least.


That's honestly really pretty cool. If we can store that much data, maybe we could create whole virtual-reality worlds?
Oh, that's nothing! It's becoming clear that machines can think at a speed far faster than the organic mind. There are experts now suggesting that a human consciousness running on hardware could conceivably experience years of perceptual existence in minutes, and have control over its own internal world, and even its identity. You could, for example, decide to create a simulation of our world, but maybe you want the sky to be purple and it's always daytime; you populate this world with simulacra of yourself and people you know, generate others who are completely random, and experience the "story" of each person, living a million lifetimes while in the "real" world only a handful of months or years have passed. :awesome: It's bizarre to imagine, but the only limitations are computational power, memory storage and heat dissipation. That's where real-life space colonisation comes in. Build a giant server on a very cold world... say... Titan, perhaps? Very cold: you could run a lot of programs at high capacity on a moon with ambient temperatures of around -180°C, plus you have all those hydrocarbons about to act as fuel for power generation! :nod:
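The "years of experience in minutes" claim is just a ratio. Assuming, purely for illustration, a simulated mind that runs a million times faster than real time (the factor is invented, not a real estimate):

```python
# Hypothetical speed-up of simulated thought over real time.
speedup = 1_000_000

real_minutes = 30  # half an hour of wall-clock time
subjective_minutes = real_minutes * speedup
subjective_years = subjective_minutes / (60 * 24 * 365)
print(round(subjective_years, 1))  # roughly 57 subjective years
```

So at that (made-up) rate, a coffee break outside the simulation is most of a lifetime inside it; the claim stands or falls entirely on how large the achievable speed-up really is.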

This video, while a bit dry, is actually fascinating. If you can sit through it, I'd recommend it if you're interested in this side of things.


GaySpacePirateKing wrote:Sounds like I agree with you here. I don't think true AI is impossible, and in fact since we are a conscious arrangement of matter I think that might make it theoretically possible to create a conscious arrangement of matter that is not biological. It just sounds to me like something that would be extraordinarily difficult to do and what I was trying to say is that I am not looking at this from the point of view of being an idealist and believing in something immaterial like a soul or separation between mind and body because I don't believe those things. It just sounds difficult.

I don't believe that consciousness is inherent in all matter. I think that idea is called panpsychism, and I think a small number of scientists play around with that idea, but I don't think it has much scientific respect and sounds pretty bizarre to me.
I get it. Your position is pretty clear, and it's not one I entirely disagree with; I just needed to grasp where you were coming from. I think consciousness will be easier to create than anyone imagines. I look at animals and I see an intelligence and awareness of self there that belies its complexity, owing to the form that very consciousness takes. I mean, corvids, dolphins and many primates and great apes have the intellect and intelligence of a four-year-old, give or take. Consciousness is difficult to define, but I suspect it is much broader than we would like to admit, and if that is the case, if the prerequisite for its presence can be as simple as a certain threshold of "thinkyness", we should give the matter of bestowing consciousness on something like a machine serious thought and consideration.
PopTart
 
Posts: 2917
+1s received: 2379
Joined: 12 December 2017, 11:15
Country: United Kingdom (gb)

Re: The future with artificial intelligence?

Unread postby Magic J » 1 August 2020, 15:09

Brenden wrote:I am both concerned and excited. I fear that the people who are currently writing the code and building the machines are not giving due consideration to the ethics and possible consequences; they're just pushing ahead and kicking the can down the road.

Move fast, break things, keep the profit, externalise the costs. In this, the tech companies are much like any other corporation. Making others foot the bill to clean up your mess is business 101. :P

Poptart wrote:We now have instances of artificial intelligences formulating conclusions where we don't know how they arrive at them, and no engineer can explain to you how the system reasoned through the data it received and arrived at the conclusions it does. That is a concern.

I get in dark moods when I think of this stuff. I think that the extension of this kind of AI into more and more areas of our lives has the potential to be profoundly alienating. I mean, we routinely create complex systems that are beyond the ability of a human mind to actually comprehend, and there's an increasing sense of powerlessness that comes with that. This would be way beyond human comprehension. Mastery of our own lives would become even more elusive, and that's concerning to me.

I lean concerned and pessimistic. Unless drastic changes are made with regard to how we organise our economic system, I assume that the further development of AI technology will primarily serve the interests of tech corporations, who'll continue to establish firm monopolies on data. But now COVID is happening, the state's back, and it's all up in the air, so I dunno. In general, I think there needs to be more democratic control of data if we're to distribute the benefits of AI fairly and avoid coercive control.
Drugs and Guns for Everyone
User avatar
Magic J
 
Posts: 1163
+1s received: 842
Joined: 20 December 2012, 23:06
Location: Scotland
Country: United Kingdom (gb)

Re: The future with artificial intelligence?

Unread postby poolerboy0077 » 1 August 2020, 16:12

Brenden wrote:I am both concerned and excited. I fear that the people who are currently writing the code and building the machines are not giving due consideration to the ethics and possible consequences; they’re just pushing ahead and kicking the bucket down the road.

Relaaaaaax. Derek says that’s just crazy talk and that things will work themselves out, because Thomas Malthus was a normie noob and people can achieve anything if they set their minds to it. Stop getting in the way of capitalist innovation, cuck!
Blow: "Nowadays even Liam can release an album of his screechy vocals and it'll probably go #1..."
Ramzus: I can admit that I'm horny just about 24/7
homomorphism: I used to not think your name was deshay and that Erick was just being racist
Hunter: sometimes I think I was literally born to be a pornstar
User avatar
poolerboy0077
 
Posts: 9033
+1s received: 2360
Joined: 20 December 2012, 21:20
Country: United States (us)

Re: The future with artificial intelligence?

Unread postby Derek » 1 August 2020, 20:52

Malthus was a normie though. I don't see how that's disputable.
User avatar
Derek
 
Posts: 6321
+1s received: 2323
Joined: 21 December 2012, 02:12
Country: United States (us)

Re: The future with artificial intelligence?

Unread postby poolerboy0077 » 1 August 2020, 21:00

We need an alpha chad like Norman Borlaug. He’s such a hottie. Fertilize and double harvest our cultivars, daddy.
Blow: "Nowadays even Liam can release an album of his screechy vocals and it'll probably go #1..."
Ramzus: I can admit that I'm horny just about 24/7
homomorphism: I used to not think your name was deshay and that Erick was just being racist
Hunter: sometimes I think I was literally born to be a pornstar
User avatar
poolerboy0077
 
Posts: 9033
+1s received: 2360
Joined: 20 December 2012, 21:20
Country: United States (us)

Re: The future with artificial intelligence?

Unread postby PopTart » 2 August 2020, 08:47

Magic J wrote:
Poptart wrote:We now have instances of artificial intelligences formulating conclusions where we don't know how they arrive at them, and no engineer can explain to you how the system reasoned through the data it received and arrived at the conclusions it does. That is a concern.

I get in dark moods when I think of this stuff. I think that the extension of this kind of AI into more and more areas of our lives has the potential to be profoundly alienating. I mean, we routinely create complex systems that are beyond the ability of a human mind to actually comprehend, and there's an increasing sense of powerlessness that comes with that. This would be way beyond human comprehension. Mastery of our own lives would become even more elusive, and that's concerning to me.

I lean concerned and pessimistic. Unless drastic changes are made with regard to how we organise our economic system, I assume that the further development of AI technology will primarily serve the interests of tech corporations, who'll continue to establish firm monopolies on data. But now COVID is happening, the state's back, and it's all up in the air, so I dunno. In general, I think there needs to be more democratic control of data if we're to distribute the benefits of AI fairly and avoid coercive control.

This is a genuine concern for me as well and, in truth, one of the more realistic ones. Any new technology resting solely in the hands of corporate interests, and of the handful of people who control those, is problematic.

The only thing I can see preventing that concentration of power and control is the ease with which AI might be copied and disseminated. That way, it could be possible for everyone to have a personal AI, so long as the platforms and technology existed to allow it.

It is possible that AI, and the technological knowledge that comes from it, could change the way we interact for the worse, but it could also allow new ways of communicating: sharing emotions, for example.

What worries me is that we can't accurately, or even reasonably, predict what will come to pass or who will be the gatekeepers of that knowledge.

That becomes even scarier when we get into the field of mind-machine interfaces and the ability to outright reprogram people (a genuinely viable prospect given just a handful of technological breakthroughs).
User avatar
PopTart
 
Posts: 2917
+1s received: 2379
Joined: 12 December 2017, 11:15
Country: United Kingdom (gb)

Re: The future with artificial intelligence?

Unread postby Eos » 2 August 2020, 18:27

I don't know, I have yet to see a real AI. For now it's just a fancy name for machine learning, which is definitely not a threat but is also very limited. We are very far away from seeing the Terminator.
And strangely enough, I'm more afraid of humans than of AI.
Eos
 
Posts: 154
+1s received: 63
Joined: 2 April 2019, 07:30
Country: France (fr)

Re: The future with artificial intelligence?

Unread postby GaySpacePirateKing » 2 August 2020, 23:43

PopTart wrote:So too could we find ourselves no longer having to find answers for ourselves. Our machine servitors learn for us, and with each generation the need to learn, even the means of study itself, falls by the wayside. Sure, this is more of an existential threat, but I see it as a real one.


Magic J wrote:I mean, we routinely create complex systems that are beyond the ability of a human mind to actually comprehend, and there's an increasing sense of powerlessness that comes with that. This would be way beyond human comprehension. Mastery of our own lives would become even more elusive, and that's concerning to me.


Most of us are never going to live up to Einstein; isn't it kind of like that with AI too? There are already many things most of us are never going to comprehend, so this doesn't really bother me too much. The sense of life being meaningless does, but not the idea of things being purposeless because AI knows better or can do it better. I can't see people no longer wanting to learn or do the things they like just because AI can do them better.

As for existence being meaningless, an AI would have to deal with that problem too. Everything is going to die, the universe will end, and it will be as if none of it ever happened. That stuff really bothers me, and an AI would have to grapple with it too. Which actually raises other questions: how long would an AI live? Would it be incredibly bored? How would a being with that level of intellect deal with the meaninglessness of its existence?

Sorry for depressing everyone :sadblue:
GaySpacePirateKing
 
Posts: 235
+1s received: 86
Joined: 14 January 2019, 18:53

