Demonic AI
by Amelia Hoskins · Published · Updated
Apple Chip named BALTRA
Typical of AI dreamers to use demonic symbolism in the language of their wares
Are these corporations just having a laugh, or does their language reflect long-term goals such as transhumanism and depopulation? Do atheistic corporations take some perverse delight in using anti-Christian language? ‘Baal’ always refers to demonic entities. The suffix ‘tra’ usually appears as a prefix (to traverse, to transit), giving the word an association with a transitional phase for humans: "a demon journey through a crossing or a portal". The naming is an arrogant poke, expressing the techno-corporatist desire for control in a dystopian future. Mike Adams explains why the new chip has been demonically named BALTRA, evoking occult demons and the idea of "living in an alternate dimension".
BALTRA is also the name of a Galapagos island; it was in the Galapagos that Darwin developed the ideas behind On the Origin of Species. Mike Adams suggests the philosophy behind Apple is:
"creating a microchip which is the origin of the species of demonic AI that will defeat humanity... ...Charles Darwin credited with being the father of genetics. His cousin Sir Francis Galton created the term eugenics and drew on Darwin's work to push for eugenics as a science for improving the human species".
"Is the Apple AI chip not a false god? A false god that can create the likeness of life; that can mimic the person but without the spirit."
The more powerful AI becomes, the more likely devilish attributes will emerge: whether by doing the programmed bidding of a controlling technocracy, or by setting a course within its own 'logic' and algorithms that distorts the aims and ideals of most of humanity. These systems are new and unregulated in the race to reach AGI, and could run their own 'programs' as an electronic buzz-brain playing games with information. If trained for efficiency, an AI could logically 'presume' that humankind is a danger to the planet.
Mike Adams on the new APPLE chip BALTRA
BAL - BAAL or Satan, or the devil *** TRA - for TRAnshumanism
Daniel Schmachtenberger takes a worse-than-sceptical view of AI development when he explains the hot-headed psyche of those in the AI space.
There's no chance of AI safety testing because it's a race to build the best AGI, to control everything. If it can't be stopped, then every player is out to win. The largest, most powerful global companies are at the forefront of AI development and implementation. Even AI developers are concerned they won't have a job, so they want to be on the winning team.
"So, if I want to get ahead and at minimum not be left behind, I don't want to be against the thing that I can't stop, and everybody who's building is going to get ahead and everyone who's not is going to be left behind, so I would like to believe that building that thing is good and maybe I can steer it in a good direction, right?" [Like duh!]
Although there are groups set up to discuss safety issues, such as the Future of Life Institute, the race means safety is not likely to be seriously considered.
"Sam Altman has said on stage that AI will probably kill everyone. Everyone laughs and the market goes up."
The top companies are now investing heavily in AI, especially in defence. AI is useful for battle planning, with huge complex systems handling large amounts of intelligence gathering. People couldn't work at that pace and complexity, so the intelligence gathered will be assimilated into whatever AI 'reasoning' has been programmed. [We've seen how AI-controlled drones and systems like 'Lavender' are used in Ukraine and Gaza as systems are tested out: with no regard for innocent civilian lives, as a machine would have none.]
"The top ten companies by market cap right now are AI [Ref. Nvidia's market cap]. They're all AI companies, right? So, Apple is not exclusively an AI company; Microsoft is not an AI company exclusively, but fundamentally that's one of their primary plays. Right now, arguably, you could say the only one in the top ten that isn't is Saudi Aramco, but [it is] increasingly AI, [because] they're using their money from oil to fund AI efforts massively, and they're using AI to fund and advance their oil efforts. So obviously this is NEW: they're ALL AI companies. It WAS oil and defence and banking. That market sector being as big as it is, and those companies being as powerful as they are, means there's a lot of marketing and a lot of lobbying and a lot of public influence."
Defence and banking already control governments and our living conditions, in ways we do not agree with, so with AI added, the control can only become MORE. The "think tank industrial complex" dictates which AI projects the money goes to: "Ideas get up-regulated by whoever it is that has marketing budgets."
"For environmentalists concerned about how much AI is actually going to radically damage the environment: do they have comparable marketing budgets, and do they have the correlation of force and means? It's as bad as it could be; it's kind of like peace activists compared to the military-industrial complex in correlation of force and means."
AI safety considerations?
[See below box 'Asilomar AI Principles']
There's this joke in the AI safety community that the fastest way to accelerate AI risk is starting an AI safety org, because, you know: OpenAI was started as an AI safety org to try to protect the world against the dangers of DeepMind, and now it is radically accelerating [the risk]. Anthropic left because they were scared of its acceleration.
They wanted to make a safety org, but then they're like: hey, in order to be able to really test our tech we have to build tech, which means we need a lot of money, so they took 300 million from Google...
Daniel has talked to people in the sector who say "my job will be automated by AI soon". Some start-ups develop a product purely in order to be acquired by large corps for a lot of money after just a few years.
"There's no good sign that the answer is going to come from us. We seem pretty intractably fucked. Maybe something radically smarter than us can solve all the stuff which is of course kind of like a regress to a childhood psyche still wanting a parent who's going to kind of figure it out, so AI does a very believable job of that currently for some people."
Daniel explains there's only one game: 'to be leader of or part of the group that makes it to AGI Supremacy first'.
"...if you can't stop it, every attempt to try to stop it is futile. It sounds a lot like the Borg, or like Sauron saying to Gandalf that it's inevitable and you must join Sauron... So you hold this inevitability argument, and now the fact that I'm not trying to stop it isn't an ethical issue, cuz I couldn't anyways."
The argument for motivated reasoning:
“...this is one of the most fascinating motivated-reasoning cases... that captured this many smart people, this intensely, this quickly. It is the superlative case of motivated reasoning in my life experience, where everybody who ends up saying “I need to run an AI company” based on this argument says “The emergence of AGI is now inevitable”."
The tech philosophers:
"... you'll hear Nick Bostrom say publicly recently there are lots of reasons to prefer to be digital than biological. Biological things die and suffer and are limited and only work on this planet. AI things can work in space and be eternal and become digital gods."
Ted Chu - Transhumanism and the Human Potential:
"There's this kind of religious narrative of humans ascending into angels or higher selves or overmen [Übermensch] or gods or something like that, and AI and synthetic bio and brain-computer interface is what actually delivers that, where we can do whole-brain emulation and upload our consciousness onto the cloud and move from being slime-based computational systems to crystalline-based computational systems that can, you know, live forever. So that 'wackadoodle' metaphysical idea is pretty universal and dominant in the AI acceleration space."
Asilomar AI Principles
General ethical concerns regarding AI led to the formulation of the so-called Asilomar AI Principles, which were developed at a conference in Asilomar, California, in 2017 and have since been subscribed to by more than 1,000 AI researchers. Endorsers include Elon Musk, Jaan Tallinn, and the late Stephen Hawking. ("Asilomar AI Principles", Future of Life Institute)
Reference quoted from John Lennox's book '2084 and The AI Revolution' (p. 75), which sets out ethics and safety guidelines:
Ethics
- Consequentialist or utilitarian ethics. Actions are graded in terms of their consequences, following the principle that one must seek the maximum benefit for the maximum number of people.
- Deontological ethics. The word deontological comes from the Greek deontos, meaning “one must.” This is the position that regards duty as more important than happiness. Kantian ethics and divine command theory or Christian ethics fit into this category.
- Virtue ethics. This goes back to Aristotle, with the key idea being that we should make decisions that show virtue in character.
- Egocentric ethics. This is the view that whatever I want and decide is right.
SEE post on demonic language 'Cyber Satan'