Robots - search results
There are already more than 101 million working age Americans that are not employed and 20 percent of the families in the entire country do not have a single member that has a job. So what in the world are we going to do when robots start taking millions upon millions more of our jobs? [...]
The wars of the future are very likely going to resemble many of the science fiction movies that we are watching right now. The U.S. military is in a global race to create the “technologies of the future”, and some of the things that they are coming up with are disturbing to say the least. Are you ready for future conflicts where “Iron Men”, “super soldiers”, “Terminator robots”, and autonomous drones do most of the killing? Are you ready for American soldiers that have been genetically modified to perform superhuman feats of strength, run at superhuman speeds and even regrow limbs? The truth is that all of this stuff is being developed right now and most Americans have no idea that it is happening.
Have you enjoyed watching the “Iron Man” movies that have come out in recent years? Tens of millions of Americans have flocked to see those films, and now they are being used as inspiration to create a new generation of “exoskeletons” for U.S. soldiers. In fact, it is being reported that this revolutionary new “smart armor” was specifically “inspired by Tony Stark’s legendary nano suit used in the Iron Man movie series“. This armor is currently being developed at MIT, and according to the BBC this armor will give U.S. troops “superhuman strength”…
The US Army is working to develop “revolutionary” smart armour that would give its troops “superhuman strength”.
It is calling on the technology industry, government labs and academia to help build the Iron Man-style suit.
Other exoskeletons that allow soldiers to carry large loads much further have already been tested by the army.
The Tactical Assault Light Operator Suit (Talos) would have such a frame but would also have layers of smart materials fitted with sensors.
The suit would also need to have wide-area networking and a wearable computer similar to Google Glass, the US Army said.
Most people would not object to high tech armor for the military, but what about genetically modifying soldiers themselves?
That is an entirely different thing altogether.
In a previous article, I included a quote from a Daily Mail article that discussed how DARPA is now working on ways to create “super soldiers” that will be able to run at Olympic speeds and regrow limbs that have been blown off…
Tomorrow’s soldiers could be able to run at Olympic speeds and will be able to go for days without food or sleep, if new research into gene manipulation is successful.
According to the U.S. Army’s plans for the future, their soldiers will be able to carry huge weights, live off their fat stores for extended periods and even regrow limbs blown apart by bombs.
The plans were revealed by novelist Simon Conway, who was granted behind-the-scenes access to the Pentagon’s high-tech Defence Advanced Research Projects Agency.
How in the world could those things possibly be accomplished?
Through genetic modification of course.
A different Daily Mail article explained how this might work…
Most gene modification techniques involve placing genetically modified DNA inside a virus and injecting it into the human body. The virus then enters human cells, and its modified DNA attaches itself to the human DNA inside those cells.
But should we really be using viruses to modify the DNA of our soldiers?
Should we really be modifying the DNA of anyone?
Of course not.
This is very dangerous territory. Just because we now have the ability to “play God” and alter human DNA does not mean that we should. If our scientists are not careful, they could end up creating monsters far beyond what anyone could imagine right now. And once Pandora’s Box is opened and these super soldiers start spreading their DNA around, it simply will not be possible to put the genie back into the bottle.
Another area where the U.S. military is pushing boundaries is in the field of robotics. For example, Northrop Grumman has developed a 1 1/2-ton unmanned killing machine that is known as MADSS…
The MADSS is one mean robot. Developed by defense industry leader Northrop Grumman and currently being showcased at the Fort Benning, Ga. “Robotics Rodeo,” the MADSS is a 1 1/2-ton unmanned ground vehicle designed to provide soldiers with covering fire while cutting down targets.
Make no mistake, it’s an automatic shooting machine, but it requires people to operate it and set targets. The MADSS — Mobile Armed Dismount Support System — tracks and fires on targets only once it gets the green light. It won’t shoot unless a soldier is directing it.
It’s half killer robot, half killer giant remote-control car. While its top speed hasn’t been stated, Northrop Grumman has said that it can follow troops on foot at about five miles an hour over rough terrain that conventional combat vehicles would find impassable.
But a remote control vehicle is one thing.
A “Terminator-like Atlas robot” is another.
Right now, Boston Dynamics is working on a 330-pound humanoid robot that looks like something out of a bad science fiction movie…
Finally, there’s fresh footage of Boston Dynamics’ Terminator-like Atlas robot, which was unveiled earlier this year. The 6-foot, 330-pound humanoid, which may or may not be a future robot infantryman, is designed to use tools and walk over rough terrain.
Check it out stomping over several boxes of rocks like nobody’s business, and then standing on one foot while being hit with a swinging weight.
Of course, it laughs on the inside at these pathetic human challenges.
You can view Atlas in action right here…
And personally, I think that the very human-looking robot known as “Petman” is even creepier.
You can see “Petman” in action right here…
As long as humans are controlling this kind of technology, at least there are some safety checks.
But what if we started creating killing machines that made their own decisions?
That sounds crazy, but according to a recent National Journal article that is exactly what is being developed. As you read this, scientists are working on ways to enable drones “to make even lethal decisions autonomously”…
Scientists, engineers and policymakers are all figuring out ways drones can be used better and more smartly, more precise and less damaging to civilians, with longer range and better staying power. One method under development is by increasing autonomy on the drone itself.
Eventually, drones may have the technical ability to make even lethal decisions autonomously: to respond to a programmed set of inputs, select a target and fire their weapons without a human reviewing or checking the result. Yet the idea of the U.S. military deploying a lethal autonomous robot, or LAR, is sparking controversy. Though autonomy might address some of the current downsides of how drones are used, they introduce new downsides policymakers are only just learning to grapple with.
The basic conceit behind a LAR is that it can outperform and outthink a human operator. “If a drone’s system is sophisticated enough, it could be less emotional, more selective and able to provide force in a way that achieves a tactical objective with the least harm,” said Purdue University Professor Samuel Liles. “A lethal autonomous robot can aim better, target better, select better, and in general be a better asset with the linked ISR [intelligence, surveillance, and reconnaissance] packages it can run.”
Would you be comfortable with unmanned drones flying over your neighborhood that are able to decide on their own whether to kill you or not?
I certainly would not be.
This kind of reminds me of the killer drones in the new movie “Oblivion” that Tom Cruise starred in. I certainly would never want such technology being used to patrol the streets of America.
And what if we lose control over this kind of technology once it becomes widespread someday?
In the past, such notions were laughable. They were used as plots for bad science fiction movies and that was about it.
But now we are moving into a time when science fiction is becoming science reality.
Are you ready for that?
Autonomous Robots Coming Soon to a Hospital Near You
Posted on Feb 8, 2013
FDA has approved the use of autonomous telemedicine robots in U.S. hospitals; although President Obama’s second term inaugural speech was inclusive and liberal, it failed to mention the growing crisis of inequality our nation faces; meanwhile, a new book details the scandalous antics of hard-partying authors. These discoveries and more below.
On a regular basis, Truthdig brings you the news items and odds and ends that have found their way to Larry Gross, director of the USC Annenberg School for Communication. A specialist in media and culture, art and communication, visual communication and media portrayals of minorities, Gross helped found the field of gay and lesbian studies.
FDA Approves First Autonomous Telemedicine Robot for Use in Hospitals
iRobot and InTouch Health have received the FDA’s stamp of approval for their RP-VITA autonomous robot, clearing the way for the bot to begin wandering hospitals throughout the United States.
Classical: What If It’s (Gasp) Entertainment?
It just may be time to give up on one of the most exhausted, long-lived cliches about classical music: that it is “high” art, uniquely deserving respect and support for its greatness.
Even Balzac Had To Intern
A young man graduates from college. At his father’s insistence, he begins interning at a law firm.
Value Evolution, Not Just Revolution, in Higher Ed
Ever since the country’s top universities teamed up last year in loose federations to offer free online classes to the masses, MOOCs have become a household word in higher-education circles.
The Real Problem With Colleges’ Business Model
The simple problem with the existing higher education business model in the United States is that it has involved aggregate per student spending that rises faster than inflation for a long time.
A continuing source of frustration for many Americans has been the fact that no one on Wall Street has gone to jail for the mortgage fraud that nearly crashed the world financial system in 2008.
The Missing Link in Obama’s Liberalism
Consensus! Left and right agree that Barack Obama not only gave a powerfully liberal inaugural address, but that he touched on all the important bases.
What You Need to Know About Genetically Engineered Food
American farmers started growing genetically engineered (GE) crops (which are also commonly referred to as “GMOs”) in 1996, and now plant 165 million acres annually.
Netflix, ‘House of Cards,’ and the Golden Age of Television
TV is replacing movies as elite entertainment, because players like Netflix, HBO, and AMC are in an arms race for lush, high-quality shows
A Portrait of the Adult Children of Immigrants
Second-generation Americans—the 20 million adult U.S.-born children of immigrants—are substantially better off than immigrants themselves on key measures of socioeconomic attainment, according to a new Pew Research Center analysis of U.S. Census Bureau data.
Why Social Movements Should Ignore Social Media
There are two ways to be wrong about the Internet.
How the Internet Reinforces Inequality in the Real World
Maps have always had a way of bluntly illustrating power.
The Study That Could Upend Everything We Thought We Knew About Declining Urban Crime
Bill Bratton took the job as commissioner of the New York Police Department in 1994 under Mayor Rudy Giuliani, setting the stage for a Cinderella story in urban law enforcement that went on to change how virtually every major U.S. city tackles crime.
America’s New Vietnam Syndrome
The kind of questioning that Hagel had to face at his confirmation hearing only goes to show that the ideological divisions of the 1970s have survived into the 21st century, reborn now as arguments over whether Iraq was ‘worth it.’
It's taken me a while to get around to Bob Gordon's stimulating essay suggesting that the great days of economic growth are behind us. It's not that different from things he's been saying before, and I have in the past had a lot of sympathy for that view. I now believe, however, that his technological pessimism is wrong — or if you prefer, it's the wrong kind of pessimism. But this is definitely a discussion worth having.
Mr. Gordon, an economics professor at Northwestern University, argues, rightly in my view, that we've really had three industrial revolutions so far, each based on a different cluster of technologies. In an essay published in September by the Center for Economic Policy Research, Mr. Gordon writes:
"The analysis in my paper links periods of slow and rapid growth to the timing of the three industrial revolutions: IR #1 (steam, railroads) from 1750 to 1830; IR #2 (electricity, internal combustion engine, running water, indoor toilets, communications, entertainment, chemicals, petroleum) from 1870 to 1900; and IR #3 (computers, the Web, mobile phones) from 1960 to present."
Mr. Gordon then argues that IR #2 was by far the most dramatic, which again seems right. Think of the America shown in the film "Lincoln," which is a society shaped by IR #1 but not yet transformed by IR #2. It was a society in which people could travel much farther and faster than ever before — but when they got to their destinations, they were still living in a horse-drawn society. Most people still lived on farms and the cities were cruder and dirtier than we can easily imagine. By the 1920s, however, urban America was already recognizably a modern society. What Mr. Gordon then does is suggest that IR #3 has already mostly run its course, that all our mobile devices and so on are new and fun but not that fundamental.
It's good to have someone questioning the tech euphoria, but I've been looking into technology issues a lot lately, and I'm pretty sure he's wrong: the information technology revolution has only begun to have its impact. Consider for a moment a sort of fantasy technology scenario in which we can produce intelligent robots able to do everything a person can do. Clearly, such a technology would remove all limits on per-capita gross domestic product, as long as you don't count robots among the capitas. All you need to do is keep raising the ratio of robots to humans, and you get whatever G.D.P. you want.
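Krugman's thought experiment above can be put as a toy calculation. The sketch below uses purely hypothetical numbers (the populations and the output-per-worker figure are invented for illustration, and each robot is assumed to produce as much as one human worker):

```python
# Toy model of the thought experiment: per-capita GDP when robots
# add to total output but are not counted among the "capitas".
# All numbers are hypothetical, chosen only for illustration.

def gdp_per_capita(humans, robots, output_per_worker=50_000):
    """Total output of humans plus robots, divided by humans only."""
    total_output = (humans + robots) * output_per_worker
    return total_output / humans

# With no robots, per-capita GDP just equals output per worker.
print(gdp_per_capita(humans=1_000_000, robots=0))          # 50000.0

# Raising the robot-to-human ratio raises the figure without bound:
print(gdp_per_capita(humans=1_000_000, robots=1_000_000))  # 100000.0
print(gdp_per_capita(humans=1_000_000, robots=3_000_000))  # 200000.0
```

The point of the sketch is only that the denominator stays fixed while the numerator grows, which is exactly why such a technology would "remove all limits" on per-capita GDP — and why the distributional question in the next paragraphs matters.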
Now, that's not happening — and in fact, as I understand it, not that much progress has been made in producing machines that think the way we do. But it turns out that there are other ways of producing very smart machines. In particular, Big Data — the use of huge databases of things like spoken conversations — apparently makes it possible for machines to perform tasks that even a few years ago were really only possible for people. Speech recognition is still imperfect, but it is vastly better than it was and it's improving rapidly, not because we've managed to emulate human understanding but because we've found data-intensive ways of interpreting speech in a very nonhuman way. And this means that in a sense we are moving toward something like my intelligent-robots world; many, many tasks are becoming machine-friendly. This in turn means that Mr. Gordon is probably wrong about diminishing returns from technology.
Ah, you ask, but what about the people? Very good question. Smart machines may make higher G.D.P. possible, but they will also reduce the demand for people — including smart people. So we could be looking at a society that grows ever richer, but in which all the gains in wealth accrue to whoever owns the robots. And then eventually Skynet decides to kill us all, but that's another story.
Anyway, interesting stuff to speculate about — and not irrelevant to policy, either, since so much of the debate over entitlements is about what is supposed to happen decades from now.
© 2013 The New York Times Company
Truthout has licensed this content. It may not be reproduced by any other source and is not covered by our Creative Commons license.
Paul Krugman joined The New York Times in 1999 as a columnist on the Op-Ed page and continues as a professor of economics and international affairs at Princeton University. He was awarded the Nobel in economic science in 2008. Mr. Krugman is the author or editor of 20 books and more than 200 papers in professional journals and edited volumes, including "The Return of Depression Economics" (2008) and "The Conscience of a Liberal" (2007).
Facebook’s Artificial Intelligence Agents Creating their own Language is more Normal than People Think
Did you know that 89 percent of all minimum wage workers in the United States are not teens? At this point, the average age of a minimum wage worker in this country is 36, and 56 percent of them are women. Millions upon millions of Americans are working as hard as they can (often that [...]
The post The Average Age Of A Minimum Wage Worker In America Is 36 appeared first on The Economic Collapse.
Barack Obama is secretly negotiating a global economic treaty which would destroy thousands of American businesses and millions of good paying American jobs. In other words, it would be the final nail in the coffin for America’s economic infrastructure. Obama knows that if the American people actually knew what was in this treaty that they [...]
The post Ultra-Secrecy Surrounds Barack Obama’s New Global Economic Treaty appeared first on The Economic Collapse.
There are things we are born able to do like eating, laughing and crying and others we pick up without much of an effort such as walking, speaking and fighting, but without strict institutional education there is no way that we can ever become a functioning member of the Matrix. We must be indoctrinated, sent to Matrix boot camp, which of course is school. How else could you take a hunter and turn him into a corporate slave, submissive to clocks, countless bosses, monotony and uniformity?
Children naturally know who they are, they have no existential angst, but schools immediately begin driving home the point of schedules, rules, lists and grades which inevitably lead the students to the concept of who they aren't. We drill the little ones until they learn to count money, tell time, measure progress, stand in line, keep silent and endure submission. They learn they aren't free and they are separated from everyone else and the world itself by a myriad of divides, names and languages.
It can’t be stressed enough how much education is simply inculcating people with the clock and the idea of a forced identity. What child, when she first goes to school, isn't taken aback to hear herself referred to by her full name?
It’s not as if language itself isn't sufficiently abstract: nothing must be left without a category. Suzy can’t just be Suzy; she is a citizen of a country and a state, a member of a religion and a product of a civilization, many of which have flags, mascots, armies, uniforms, currencies and languages. Once all the mascots, tag lines and corporate creeds are learned, then history can begin to be taught. The great epic myths invented and conveniently woven into the archetypes which have come down through the ages cement this matrix into the child’s mind.
Even the language that she speaks without effort must be deconstructed for her. An apple will never again be just an apple; it will become a noun, a subject, or an object. Nothing will be left untouched; all must be ripped apart and explained back to the child in Matrixese.
We are taught almost nothing useful during the twelve or so years that we are institutionalized and conditioned for slavery: not how to cook, farm, hunt, build, gather, laugh or play. We are only taught how to live by a clock and conform to institutionalized behaviors that make for solid careers as slaveocrats.
The state apparatus is based on law, which is a contract between the people and an organism created to administer common necessities: an exchange of sovereignty between the people and the state. This sounds reasonable, but when one looks at the mass slaughters of the 20th century, almost without exception, the perpetrators are the states themselves.
Money is their most brilliant accomplishment. Billions of people spend most of their waking lives either acquiring it or spending it without ever understanding what it actually is. In this hologram of a world, the only thing one can do without money is breathe. For almost every other human activity they want currency, from eating and drinking to clothing oneself and finding a partner. Religion came from innate spirituality and patriotism from the tribe, but money they invented themselves: the most fantastic and effective of all their tools of domestication.
Escaping the Grip of Control
How can this awakening be explained? How do you describe the feeling of swimming in the ocean at dusk to someone who has never even seen the sea? You can't, but what you can do is crack open a window for them and if enough windows are opened, the illusion begins to lose its luster.
If you make more than $27,520 a year at your job, you are doing better than half the country is. But you don't have to take my word for it, you can check out the latest wage statistics from the Social Security administration right here. But of course $27,520 a year will not allow you [...]
By the end of this decade, the U.S. Army will be the smallest that it has been since World War II, the U.S. Navy will be the smallest that it has been since World War I, and the U.S. Air Force will have gotten nearly 500 planes smaller. Without a doubt, the U.S. military has [...]
Part I: The Wolves of Psycho Street: America’s Economic Enslavement by the Psychopathic Corporate...
Global Capitalism Has Written Off The Human Race Paul Craig Roberts Economic theory teaches that free price and profit movements ensure that capitalism produces the greatest welfare for the greatest number. Losses indicate economic activities where costs exceed the value of production, thus investment in these activities is curtailed. Profits indicate economic activities where the…
The post Global Capitalism Has Written Off The Human Race — Paul Craig Roberts appeared first on PaulCraigRoberts.org.
How Junk Economists Help The Rich Impoverish The Working Class Paul Craig Roberts Last week, I explained how economists and policymakers destroyed our economy for the sake of short-term corporate profits from jobs offshoring and financial deregulation. http://www.paulcraigroberts.org/2014/01/25/economists-policymakers-murdered-economy-paul-craig-roberts/ That same…
The post How Junk Economists Help The Rich Impoverish The Working Class — Paul Craig Roberts appeared first on PaulCraigRoberts.org.
GE Engineers and American Government Officials Warned of Dangerous Nuclear Design 5 of the 6 nuclear reactors at Fukushima are General Electric Mark 1 reactors. GE knew decades ago that the design was faulty. ABC News reported in 2011: Thirty-five …
Amazon, Applebee’s and Google’s Job-Crushing Drones and Robot Armies: They’re Coming for Your Job...
The most important website last weekend and in weeks to come — the one on which the hopes and fears of countless Americans are focused (and on which the President’s poll ratings depend) — is not HealthCare.gov. It’s Amazon.com.
Even if and when HealthCare.gov works perfectly, relatively few Americans will be affected by it. Only 5 percent of us are in the private health-insurance market to begin with. But almost half of Americans are now shopping for great holiday deals online, and many will be profoundly affected — not because they get great deals, but because their jobs and incomes are at stake.
Online retailing is the future. Amazon is the main online shopping portal this holiday season, but traditional retailers are moving online as fast as they can. Online sales are already up 20 percent over last year, and the pace will only accelerate. Target and many other bricks-and-mortar outlets plan to spend more on technology next year than on building and upgrading stores.
Americans are getting great deals online, and they like the convenience. But there’s a hidden price. With the growth of online retailing, fewer Americans will have jobs in bricks-and-mortar retail stores.
Amazon announced last summer it would add 5,000 new jobs to the 20,000 it already has. But not even 25,000 Amazon jobs come near to replacing the hundreds of thousands of retail jobs Amazon has already wiped out, and the hundreds of thousands more it will eliminate in the future.
To put this in some perspective you need to know that retail jobs have been the fastest growing of all job categories since the recession ended in 2009. But given the rapid growth of online retailing, that trend can’t possibly last. What will Americans do when online sales take over?
Add to this the fact that most of what’s being sold this holiday season – online and off-line — is no longer made by Americans. Vast shipping containers of gadgets, garments, and other goodies are fabricated or assembled or sewed together in Asia for the American market.
Online retailers are facilitating this move by having these goods shipped directly from Asian factories to distribution centers in America and then to our homes, without ever having to go to an American retail store or even a wholesaler. This means even lower prices and better deals. But it also means fewer jobs and lower pay for many Americans.
Some manufacturing is coming back to America, to be sure, but not the assembly-line jobs that used to be the core of manufacturing employment. Computerized machine tools and robots are doing an increasing portion of the work — which is why many companies can afford to bring their factories back here.
Get it? Technology and globalization are driving the good deals American consumers are getting this holiday season. But the same forces are keeping wages down, and are even on the verge of eliminating many of the low-wage retail and related service jobs many Americans now need to make ends meet.
To put it another way, American consumers getting great shopping deals are also American workers on the losing end of those same deals.
The biggest reason holiday shopping is especially frenzied this season is that so many Americans are already stretched to the breaking point that they’re more desperate than ever for bargains. Sixty-five percent of today’s shoppers are living paycheck-to-paycheck. That’s up from 61 percent last year, according to consumer research by Booz and Company.
Median household income in America continues to drop, adjusted for inflation, because low-wage jobs are the major ones available. Lower-wage occupations accounted for only 21 percent of job losses during the Great Recession. They’ve accounted for 58 percent of all job growth since then.
The President’s dropping poll-ratings are only partly due to the bumbling roll-out of the Affordable Care Act. The computer glitches at HealthCare.gov aren’t the most important reason why Americans are grumpy this holiday season. The bigger problem is the economy remains lousy for most people.
Technology and globalization are taking over more and more American jobs. There’s no easy fix for this, and it’s hardly the President’s fault. But the sobering reality is the United States has no national strategy for creating more good jobs in America. Until we do, more and more Americans will be chasing great deals that come largely at their own expense.
Al-Jazeera – 28 November 2013
There are many things to fear in Gaza: Attacks from Israel’s Apache helicopters and F-16 fighter jets, the coastal enclave’s growing isolation, the regular blackouts from power shortages, increasingly polluted drinking water and rivers of sewage flooding the streets.
Meanwhile, for most Palestinians in Gaza the anxiety-inducing soundtrack to their lives is the constant buzz of the remotely piloted aircraft – better known as “drones” – that hover in the skies above.
Drones are increasingly being used for surveillance and extra-judicial execution in parts of the Middle East, especially by the US, but in nowhere more than Gaza has the drone become a permanent fixture of life. More than 1.7 million Palestinians, confined by Israel to a small territory in one of the most densely populated areas in the world, are subject to near continual surveillance and intermittent death raining down from the sky.
There is little hope of escaping the zenana – an Arabic word referring to a wife’s relentless nagging that Gazans have adopted to describe the drone’s oppressive noise and their feelings about it. According to statistics compiled by human rights groups in Gaza, civilians are the chief casualties of what Israel refers to as “surgical” strikes from drones.
“When you hear the drones, you feel naked and vulnerable,” said Hamdi Shaqura, deputy director of the Palestinian Centre for Human Rights, based in Gaza City. “The buzz is the sound of death. There is no escape, nowhere is private. It is a reminder that, whatever Israel and the international community assert, the occupation has not ended. We are still living completely under Israeli control. They control the borders and the sea and they decide our fates from their position in the sky,” said Shaqura.
The Israeli military did not respond to Al Jazeera’s requests for comment.
Suffer the children
The sense of permanent exposure, coupled with the fear of being mistakenly targeted, has inflicted deep psychological scars on civilians, especially children, according to experts.
“There is a great sense of insecurity. Nowhere feels safe for the children, and they feel no one can offer them protection, not even their parents,” said Ahmed Tawahina, a psychologist running clinics in Gaza as part of the Community Mental Health Programme. “That traumatises both the children and parents, who feel they are failing in their most basic responsibility.”
Shaqura observed: “From a political perspective, there is a deep paradox. Israel says it needs security, but it demands it at the cost of our constant insecurity.”
There are no statistics that detail the effect of the drones on Palestinians in Gaza. Doctors admit it is impossible to separate the psychological toll inflicted by drones from other sources of damage to mental health, such as air strikes by F-16s, severe restrictions on movement and the economic insecurity caused by Israel’s blockade.
But field researchers working for Palestinian rights groups point out that the use of drones is intimately tied to these other sources of fear and anxiety. Drones fire missiles themselves, they guide attacks by F-16s or helicopters, and they patrol and oversee the borders.
A survey in medical journal The Lancet following Operation Cast Lead, Israel’s month-long attack on Gaza in winter 2008-09, found large percentages of children suffered from symptoms of psychological trauma: Fifty-eight percent permanently feared the dark; 43 percent reported regular nightmares; 37 percent wet the bed and 42 percent had crying attacks.
Tawahina described the sense of being constantly observed as a “form of psychological torture, which exhausts people’s mental and emotional resources. Among children at school, this can be seen in poor concentration and unruly behaviour.” The trauma for children is compounded by the fact that the drones also disrupt what should be their safest activity – watching TV at home. When a drone is operating nearby, it invariably interferes with satellite reception.
“It doesn’t make headlines, but it is another example of how there is no escape from the drones. Parents want their children indoors, where it feels safer and where they’re less likely to hear the drones, but still the drone finds a way into their home. The children cannot even switch off from the traumas around them by watching TV because of the drones.”
Israel’s ‘major advantage’
Israel developed its first drones in the early 1980s, during its long occupation of south Lebanon, to gather aerial intelligence without exposing Israeli pilots to anti-aircraft missiles. Efraim Inbar, director of the Begin-Sadat Centre for Strategic Studies at Bar Ilan University, said drones help in situations where good, on-the-ground intelligence is lacking. “What the UAV gives you is eyes on the other side of the hill or over the border,” he said. “That provides Israel with a major advantage over its enemies.”
Other Israeli analysts have claimed that the use of drones, with their detailed intelligence-collecting abilities, is justified because they reduce the chances of errors and the likelihood of “collateral damage” – civilian deaths – during attacks.
But, according to Inbar, the drone is no better equipped than other aircraft for gathering intelligence or carrying out an execution.
“The advantage from Israel’s point of view is that using a drone for these tasks reduces the risk of endangering a pilot’s life or losing an expensive plane. That is why we are moving towards much greater use of these kinds of robots on the battlefield,” he said.
‘Mistakes can happen’
According to Gaza human rights group al-Mezan, Israel started using drones over the territory from the start of the second intifada in 2000, but only for surveillance.
Israel’s first extra-judicial executions using drones occurred in 2004, when two Palestinians were killed. But these operations greatly expanded after 2006, in the wake of Israel’s withdrawal of settlers and soldiers from Gaza and the rise to power of the Palestinian Islamic movement Hamas.
Drones, the front-line weapon in Israel’s surveillance operations and efforts to foil rocket attacks, killed more than 90 Palestinians in each of the years 2006 and 2007, according to al-Mezan. The figures soared during Operation Cast Lead and in its aftermath, with 461 Palestinians killed by drones in 2009. The number peaked again with 199 deaths in 2012, the year when Israel launched the eight-day Operation Pillar of Defence against Gaza.
Despite Israeli claims that the intelligence provided by drones makes it easier to target those Palestinians it has defined as “terrorists”, research shows civilians are the main victims. In the 2012 Pillar of Defence operation, 36 of the 162 Palestinians killed died in drone strikes, and a further 100 were injured by drones. Of those 36, two-thirds were civilians.
Also revealing was a finding that, although drones were used in only five percent of air strikes, they accounted for 23 percent of the total deaths during Pillar of Defence. According to the Economist magazine, the assassination of Hamas leader Ahmed Jabari, which triggered that operation, was carried out using a Hermes 450 drone.
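The figures reported above can be cross-checked with simple arithmetic. The numbers below are taken directly from the two preceding paragraphs; the small gap between the computed 22.2 percent and the cited 23 percent presumably reflects rounding or slightly different totals in the underlying reports.

```python
# Figures reported above for Operation Pillar of Defence (2012)
drone_deaths = 36      # Palestinians killed by drone strikes
total_deaths = 162     # total Palestinians killed in the operation

# Drones' share of total deaths, versus the "23 percent" cited above
drone_share = drone_deaths / total_deaths
print(f"Drone strikes' share of deaths: {drone_share:.1%}")  # 22.2%

# "Of those 36 killed, two-thirds were civilians"
civilians = drone_deaths * 2 // 3
print(f"Civilians among drone deaths: {civilians} of {drone_deaths}")  # 24 of 36
```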
Palestinian fighters report that they have responded to the constant surveillance by living in hiding, rarely going outdoors and avoiding using phones or cars. It is a way of life not possible for most people in Gaza.
Gaza’s armed groups are reported to be trying to find a way to jam the drones’ navigation systems. In the meantime, Hamas has claimed it has shot down three drones, the latest this month, though Israel says all three crashed due to malfunctions.
Last week, on the anniversary of the launch of Pillar of Defence, an Israeli commander whose soldiers control the drones over Gaza from a base south of Tel Aviv told the Haaretz newspaper that “many” air strikes during the operation had involved drones. “Lt Col Shay” was quoted saying: “Ultimately, we are at war. As much as the IDF strives to carry out the most precise surgical strikes, mistakes can happen in the air or on the ground.”
Random death by drone
It is for this reason that drones have become increasingly associated with random death from the sky, said Samir Zaqout, a senior field researcher for Al-Mezan.
“We know from the footage taken by drones that Israel can see what is happening below in the finest detail. And yet women and children keep being killed in drone attacks. Why the continual mistakes? The answer, I think, is that these aren’t mistakes. The message Israel wants to send us is that there is no protection whether you are a civilian or fighter. They want us afraid and to make us turn on the resistance [Palestinian fighters].”
Zaqout also points to a more recent use of drones – what has come to be known as “roof-knocking”. This is when a drone fires small missiles at the roof of a building to warn the inhabitants to evacuate – a practice Israel developed during Operation Cast Lead three years earlier, to allay international concerns about its repeated levellings of buildings with civilians inside.
In Pillar of Defence in 2012, 33 buildings were targeted by roof-knocking.
Israel says it provides 10 minutes’ warning from a roof-knock to an air strike, but, in practice, families find they often have much less time. This, said Zaqout, puts large families in great danger as they usually send their members out in small groups to be sure they will not be attacked as they move onto the streets.
One notorious case occurred during Cast Lead, when six members of the Salha family, all women and children, were killed when their home was shelled moments after a roof-knocking. The father, Fayez Salha, who survived, lost a case for damages in Israel’s Supreme Court last February and was ordered to pay costs after the judges ruled that the attack was legitimate because it occurred as part of a military operation.
A US citizen who has lived long-term in Gaza, and who wished not to be named for fear of reprisals from Israel, said she often heard the drones at night when the street noise died down, or hovering above her while she was out walking. “The sound is like the buzz of a mosquito, although there is one type of drone that sometimes comes into view that is silent,” she said.
She added that she knew of families that, before moving into a new apartment building, checked to see whether it housed a fighter or a relative of a fighter, for fear that the building may be attacked by Israel.
Shaqura said the drones inevitably affect one’s day-to-day behaviour. He said he was jogging early one morning while a drone hovered overhead.
“I got 100 metres from my front door when I started to feel overwhelmed with fear. I realised that my tracksuit was black, the same colour as many of the fighters’ uniforms. I read in my work too many reports of civilians being killed by drones not to see the danger. So I hurried back home.”
Would you like to surf the Internet, make a phone call or send a text message using only your brain? Would you like to “download” the content of a 500 page book into your memory in less than a second? Would you like to have extremely advanced nanobots constantly crawling around in your body monitoring it for disease? Would you like to be able to instantly access the collective knowledge base of humanity wherever you are? All of that may sound like science fiction, but these are technologies that some of the most powerful high tech firms in the world actually believe are achievable by the year 2020. However, with all of the potential “benefits” that such technology could bring, there is also the potential for great tyranny. Just think about it. What do you think that the governments of the world could do if almost everyone had a mind reading brain implant that was connected to the Internet? Could those implants be used to control and manipulate us? Those are frightening things to consider.
For now, most of the scientists that are working on brain implant technology do not seem to be too worried about those kinds of concerns. Instead, they are pressing ahead into realms that were once considered to be impossible.
Right now, there are approximately 100,000 people around the world who have implants in their brains. Most of those are for medical reasons.
But this is just the beginning. According to the Boston Globe, the U.S. government plans “to spend more than $70 million over five years to jump to the next level of brain implants”.
This new project is being called the Systems-Based Neurotechnology for Emerging Therapies (SUBNETS), and the goal is to be able to monitor the “mental health” of soldiers and veterans. The following is how a recent CNET article described SUBNETS…
SUBNETS is inspired by Deep Brain Stimulation (DBS), a surgical treatment that involves implanting a brain pacemaker in the patient’s skull to interfere with brain activity to help with symptoms of diseases like epilepsy and Parkinson’s. DARPA’s device will be similar, but rather than targeting one specific symptom, it will be able to monitor and analyse data in real time and issue a specific intervention according to brain activity.
This kind of technology is being developed by the private sector as well. In fact, according to Scientific American scientists are becoming increasingly excited about how brain implants can be used to “reboot” the brains of people with depression…
Psychological depression is more than an emotional state. Good evidence for that comes from emerging new uses for a technology already widely prescribed for Parkinson’s patients. The more neurologists and surgeons learn about the aptly named deep brain stimulation, the more they are convinced that the currents from the technology’s implanted electrodes can literally reboot brain circuits involved with the mood disorder.
Would you like to have your brain “rebooted” by a chip inside your head?
And of course this is how brain implants will be marketed to the public at first. They will be sold as something that has great “health benefits”. For example, one firm has developed a brain implant that can detect and treat epileptic seizures…
The NeuroPace RNS is the first implant to listen to brain waves and autonomously decide when to apply a therapy to prevent an epileptic seizure. It was developed by a company with a staff of less than 90 people, only about 30 on the core electronic, mechanical, and software engineering teams.
A different team of researchers has discovered that it can stimulate the repair of brain tissue in rats using brain implants…
Stroke and Parkinson’s Disease patients may benefit from a controversial experiment that implanted microchips into lab rats. Scientists say the tests produced effective results in brain damage research.
Rats showed motor function in formerly damaged gray matter after a neural microchip was implanted under the rat’s skull and electrodes were transferred to the rat’s brain. Without the microchip, rats with damaged brain tissue did not have motor function. Both strokes and Parkinson’s can cause permanent neurological damage to brain tissue, so this scientific research brings hope.
Most of us won’t need brain implants for medical reasons though.
So how will they be marketed to the rest of us?
Well, what if you were told that they could give you “super powers”?
Would you want a brain implant then?
The following is a short excerpt from a recent Scientific American article…
Our world is determined by the limits of our five senses. We can’t hear pitches that are too high or low, nor can we see ultraviolet or infrared light—even though these phenomena are not fundamentally different from the sounds and sights that our ears and eyes can detect. But what if it were possible to widen our sensory boundaries beyond the physical limitations of our anatomy? In a study published recently in Nature Communications, scientists used brain implants to teach rats to “see” infrared light, which they usually find invisible. The implications are tremendous: if the brain is so flexible it can learn to process novel sensory signals, people could one day feel touch through prosthetic limbs, see heat via infrared light or even develop a sixth sense for magnetic north.
And some very prominent Internet firms simply take it for granted that most of us will eventually have brain implants that connect us directly to the Internet…
Google has a plan. Eventually it wants to get into your brain. “When you think about something and don’t really know much about it, you will automatically get information,” Google CEO Larry Page said in Steven Levy’s book, “In the Plex: How Google Thinks, Works and Shapes Our Lives.” “Eventually you’ll have an implant, where if you think about a fact, it will just tell you the answer.”
At this point you might be thinking that this will never happen because getting a brain implant is a very complicated and expensive procedure.
Well, according to an article in the Wall Street Journal, that is not actually true. In fact, the typical procedure is very quick and often requires just an overnight stay in the hospital…
Neural implants, also called brain implants, are medical devices designed to be placed under the skull, on the surface of the brain. Often as small as an aspirin, implants use thin metal electrodes to “listen” to brain activity and in some cases to stimulate activity in the brain. Attuned to the activity between neurons, a neural implant can essentially “listen” to your brain activity and then “talk” directly to your brain.
If that prospect makes you queasy, you may be surprised to learn that the installation of a neural implant is relatively simple and fast. Under anesthesia, an incision is made in the scalp, a hole is drilled in the skull, and the device is placed on the surface of the brain. Diagnostic communication with the device can take place wirelessly. When it is not an outpatient procedure, patients typically require only an overnight stay at the hospital.
In the future, the minds of most people could potentially be connected to the Internet 24 hours a day. Imagine sending an email or answering your phone by just thinking about it. According to the New York Times, this is where we are eventually heading…
Soon, we might interact with our smartphones and computers simply by using our minds. In a couple of years, we could be turning on the lights at home just by thinking about it, or sending an e-mail from our smartphone without even pulling the device from our pocket. Farther into the future, your robot assistant will appear by your side with a glass of lemonade simply because it knows you are thirsty.
Researchers in Samsung’s Emerging Technology Lab are testing tablets that can be controlled by your brain, using a cap that resembles a ski hat studded with monitoring electrodes, the MIT Technology Review, the science and technology journal of the Massachusetts Institute of Technology, reported this month.
The technology, often called a brain computer interface, was conceived to enable people with paralysis and other disabilities to interact with computers or control robotic arms, all by simply thinking about such actions. Before long, these technologies could well be in consumer electronics, too.
So how far away is such technology?
According to a Computer World UK article, Intel believes that they will have Internet-connected brain implants in people’s heads by the year 2020…
By the year 2020, you won’t need a keyboard and mouse to control your computer, say Intel researchers. Instead, users will open documents and surf the web using nothing more than their brain waves.
Scientists at Intel’s research lab in Pittsburgh are working to find ways to read and harness human brain waves so they can be used to operate computers, television sets and cell phones. The brain waves would be harnessed with Intel-developed sensors implanted in people’s brains.
The scientists say the plan is not a scene from a sci-fi movie; Big Brother won’t be planting chips in your brain against your will. Researchers expect that consumers will want the freedom they will gain by using the implant.
And that would only be the tip of the iceberg. Futurist Ray Kurzweil is actually convinced that we will all eventually have hordes of nanobots running around our bodies monitoring our health and looking for disease…
‘Bridge two (is) the biotechnology revolution, where we can reprogram biology away from disease.
‘And that is not the end-all either.
‘Bridge three is to go beyond biology, to the nanotechnology revolution.
‘At that point we can have little robots, sometimes called nanobots, that augment your immune system.
‘We can create an immune system that recognizes all disease, and if a new disease emerged, it could be reprogrammed to deal with new pathogens.’
Such robots, according to Kurzweil, will help fight diseases, improve health and allow people to remain active for longer.
Are you ready for this kind of a future?
These technologies are being developed right now, and they will be enthusiastically adopted by a large segment of the general public.
At some point in the future, having a brain implant may be as common as using a smartphone is today.
And of course the mainstream media will be telling all of us how wonderful it is to have a brain implant. If you doubt this, just check out the following NBC News report where we are all told that we can expect to have microchip implants by the year 2017…
So are you ready for this brave new world?
Will you ever let them put a chip in your head?
Please share your thoughts by posting a comment below…
Former Vice President and Nobel laureate Al Gore during his speech on energy at Constitution Hall in Washington, Thursday, July 17, 2008. (Photo: Brendan Smialowski / The New York Times) Denied the presidency by the United States Supreme Court (in a 5-4 vote), Al Gore became a Jeremiah for a while during the worst of the Bush years. Generally, the mainstream media ignored him or derided him, even as he spoke truth to power about the War in Iraq and the threats to democracy.
Since then he's become an apostle about the crisis of climate change, an entrepreneur, and a visionary.
In his latest book, "The Future: Six Drivers of Global Change," Gore offers a futurist manifesto. As with most chroniclers of the human condition and the debate over the challenges ahead for our species, Gore has his advocates and detractors.
Gore considers himself an optimist, as he notes at the end of his introduction:
"Indeed, I am an optimist—though my optimism is predicated on the hope that we will find ways to see and think clearly about the obvious trends that are even now gaining momentum, that we will reason together and attend to the dangerous distortions in our present ways of describing and measuring the powerful changes that are now under way, that we will actively choose to preserve human values and protect them, not least against the mechanistic and destructive consequences of our baser instincts that are now magnified by technologies more powerful than any that those in previous generations, even Jules Verne, could have imagined."
When you read that Gore hopes that "we will reason together," you can tell that he has been away from Washington DC for a long, long time.
But his book is not about the circus of politics; it is about the survival of the planet and the co-existence of people that transcends national boundaries.
The following excerpt is the introduction to "The Future: Six Drivers of Global Change":
Like many fulfilling journeys, this book began not with answers but with a question. Eight years ago, when I was on the road, someone asked me: “What are the drivers of global change?” I listed several of the usual suspects and left it at that. Yet the next morning, on the long plane flight home, the question kept pulling me back, demanding that I answer it more precisely and accurately—not by relying on preconceived dogma but by letting the emerging evidence about an emerging world take me where it would. The question, it turned out, had a future of its own. I started an outline on my computer and spent several hours listing headings and subheadings, then changing their rank order and relative magnitude, moving them from one category to another and filling in more and more details after each rereading.
As I spent the ensuing years raising awareness about climate change and pursuing a business career, I continued to revisit, revise, and sharpen the outline until finally, two years ago, I concluded that it would not leave me alone until I dug in and tried to thoroughly answer the question that had turned into something of an obsession.
What emerged was this book, a book about the six most important drivers of global change, how they are converging and interacting with one another, where they are taking us, and how we as human beings—and as a global civilization—can best affect the way these changes unfold. In order to reclaim control of our destiny and shape the future, we must think freshly and clearly about the crucial choices that confront us as a result of:
• The emergence of a deeply interconnected global economy that increasingly operates as a fully integrated holistic entity with a completely new and different relationship to capital flows, labor, consumer markets, and national governments than in the past;
• The emergence of a planet-wide electronic communications grid connecting the thoughts and feelings of billions of people and linking them to rapidly expanding volumes of data, to a fast growing web of sensors being embedded ubiquitously throughout the world, and to increasingly intelligent devices, robots, and thinking machines, the smartest of which already exceed the capabilities of humans in performing a growing list of discrete mental tasks and may soon surpass us in manifestations of intelligence we have always assumed would remain the unique province of our species;
• The emergence of a completely new balance of political, economic, and military power in the world that is radically different from the equilibrium that characterized the second half of the twentieth century, during which the United States of America provided global leadership and stability—shifting influence and initiative from West to East, from wealthy countries to rapidly emerging centers of power throughout the world, from nation-states to private actors, and from political systems to markets;
• The emergence of rapid unsustainable growth—in population; cities; resource consumption; depletion of topsoil, freshwater supplies, and living species; pollution flows; and economic output that is measured and guided by an absurd and distorted set of universally accepted metrics that blinds us to the destructive consequences of the self-deceiving choices we are routinely making;
• The emergence of a revolutionary new set of powerful biological, biochemical, genetic, and materials science technologies that are enabling us to reconstitute the molecular design of all solid matter, reweave the fabric of life itself, alter the physical form, traits, characteristics, and properties of plants, animals, and people, seize active control over evolution, cross the ancient lines dividing species, and invent entirely new ones never imagined in nature; and
• The emergence of a radically new relationship between the aggregate power of human civilization and the Earth’s ecological systems, including especially the most vulnerable—the atmosphere and climate balance upon which the continued flourishing of humankind depends—and the beginning of a massive global transformation of our energy, industrial, agricultural, and construction technologies in order to reestablish a healthy and balanced relationship between human civilization and the future.
This book is data-driven and is based on deep research and reporting—not speculation, alarmism, naïve optimism, or blue-sky conjecture. It represents the culmination of a multiyear effort to investigate, decipher, and present the best available evidence and what the world’s leading experts tell us about the future we are now in the process of creating.
There is a clear consensus that the future now emerging will be extremely different from anything we have ever known in the past. It is a difference not of degree but of kind. There is no prior period of change that remotely resembles what humanity is about to experience. We have gone through revolutionary periods of change before, but none as powerful or as pregnant with the fraternal twins—peril and opportunity—as the ones that are beginning to unfold. Nor have we ever experienced so many revolutionary changes unfolding simultaneously and converging with one another.
This is not a book primarily about the climate crisis, though the climate crisis is one of the six emergent changes that are quickly reshaping our world, and its interaction with the other five drivers of change has revealed to me new ways to understand it. Nor is it primarily about the degradation of democracy in the United States and the dysfunctionality of governance in the world community—though I continue to believe that these leadership crises must be resolved in order for humankind to reclaim control of our destiny. Indeed all six of these emergent revolutionary changes are threatening to overtake us at a moment in history when there is a dangerous vacuum of global leadership.
Neither is this a manifesto intended to lay the groundwork for some future political campaign. I have run for political office often enough in the past. The joke I often use to deflect questions about whether I have finally surrendered any intention to do so again is actually as close to the truth as any words I can summon in describing my attitude toward politics: I am a recovering politician and the chances of a relapse have been diminishing for long enough to increase my confidence that I will not succumb to that temptation again. In the Conclusion, however, you will find a recommended agenda for action that is based on the analysis in this book.
A New Law of Nature
As a young freshman member of the U.S. House of Representatives elected in 1976, I joined a new bipartisan group of congressmen and senators known as the Congressional Clearinghouse on the Future, founded by the late Charlie Rose of North Carolina. In my second term, Rose asked me to succeed him as chair of the group. We organized workshops on the implications of new technologies and scientific discoveries and met with leaders in business and science. Among our other initiatives, we persuaded all 200 subcommittees in the Congress to publish a list of the most important issues they expected to emerge over the following twenty years and published it as “The Future Agenda.” Most of all, we studied emerging trends and met regularly with the leading thinkers about the future: Daniel Bell, Margaret Mead, Buckminster Fuller, Carl Sagan, Alvin Toffler, John Naisbitt, Arno Penzias, and hundreds of others.
The visiting scholar who made perhaps the biggest impression on me was a short and balding scientist born in Russia a few months before the 1917 Revolution but educated in Belgium: Ilya Prigogine, who had just won the Nobel Prize in Chemistry for his discovery of a major corollary to the Second Law of Thermodynamics.
Entropy, according to the Second Law, causes all isolated physical systems to break down over time and is responsible for irreversibility in nature. For a simple example of entropy, consider a smoke ring: it begins as a coherent donut with clearly defined boundaries. But as the molecules separate from one another and dissipate energy into the air, the ring falls apart and disappears. All so-called closed systems are subject to the same basic process of dissolution; in some, entropy operates quickly, while in others the process takes more time.
Prigogine’s discovery was that an open system—that is, a system that imports flows of energy from outside the system into it, through it, and out again—not only breaks down, but as the flow of energy continues, the system then reorganizes itself at a higher level of complexity. In a sense, the phenomenon described by Prigogine is the opposite of entropy. Self-organization, as a law of nature and as a process of change, is truly astonishing. What it means is that complex new forms can emerge spontaneously through self-organization.
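Prigogine’s distinction can be stated compactly. The standard textbook formulation (sketched here, not quoted from the book) splits the entropy change of an open system into entropy produced inside the system and entropy exchanged with its surroundings:

```latex
% Second Law for an isolated system: entropy never decreases
\frac{dS}{dt} \ge 0

% Prigogine's decomposition for an open system:
% d_i S = internal entropy production, d_e S = exchange with surroundings
dS = d_i S + d_e S, \qquad d_i S \ge 0
```

The exchange term $d_e S$ carries no sign restriction, so a system that exports enough entropy ($d_e S < 0$ with $|d_e S| > d_i S$) can lower its own entropy and reorganize at higher complexity, which is the self-organization described above.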
Consider the increased flows of information throughout the world following the introduction of the Internet and the World Wide Web. Elements of the old information pattern began to break down. Many newspapers went bankrupt, readership sharply declined in most others, bookstores consolidated and closed. Many business models became obsolete. But the new emergent pattern led to the self--organization of thousands of new business models, and volumes of online communication dwarfing those that characterized the world of the printing press.
The Earth itself, when viewed as a whole, is also an open system. It imports energy from the sun that flows into and through the elaborate patterns of energy transfer that make up the Earth system, including the oceans, the atmosphere, the various geochemical processes—and life itself. The energy then flows from the Earth back into the universe surrounding it as heat energy in the form of infrared radiation.
The essence of the emergent crisis of global warming is that we are importing enormous amounts of energy from the crust of the Earth and exporting entropy (that is, progressive disorder) into the previously stable, though dynamic, ecological systems upon which the continued flourishing of civilization depends. These new flows of energy, originally imported to the Earth from the sun ages ago, have been stabilized underground for millions of years as inert deposits of carbon.
By mobilizing them and injecting the waste products from their combustion into the atmosphere, we are breaking down the stable climate pattern that has persisted since not long after the end of the last Ice Age ten millennia ago. This was not long before the first cities and the beginning of the Agricultural Revolution, which began to spread in the valleys of the Nile, Tigris, Euphrates, Indus, and Yellow rivers 8,000 years ago after Stone Age women and men patiently picked and selectively bred the plant varieties on which our modern diet still depends. In the process, we are forcing the emergence of a new climate pattern very different from the one to which our entire civilization is tightly configured and within which we have thrived.
While Prigogine’s discovery of this new law of nature may seem arcane, its implications for the way we should think about the future are profound. The modern meaning of the word “emergence,” and the entire field of knowledge known as complexity theory, are both derived from Prigogine’s work. The motivation for his exploration of emergence was his passion for understanding how the future becomes irreversibly different from the past. He wrote that, “given my interest in the concept of time, it was only natural that my attention was focused on . . . the study of irreversible phenomena, which made so manifest the ‘arrow of time.’ ”
The History of the Future
The way we think about the future has a past. Throughout the history of human civilization, every culture has had its own idea of the future. In the words of an Australian futurist, Ivana Milojević, “Although the conception of time and the future exist universally, they are understood in different ways in different societies.” Some have assumed that time is circular and that past, present, and future are all part of the same recurring cycle. Others have believed that the only future that matters is in the afterlife.
The crushing disappointments that are so often part of the human condition have sometimes led to crises of confidence in the future, replacing hope with despair. But most have learned from their life experiences and the stories told by their elders that what we do in the present, when informed by knowledge of the past, can shape the future in objectively better ways.
Anthropologists tell us of evidence dating back almost 50,000 years of humans trying to divine the future with the help of oracles or mediums. Some attempted to see into the future by reading clues to the unfolding patterns of life in the entrails of animals sacrificed to the gods, by studying the movements of fish, by interpreting marks on the Earth, or in any of a hundred other ways. Some still read the patterns of palms or Tarot cards for the same purpose. The implicit assumption in such searches is that all reality is of one fabric encompassing past, present, and future, according to a design whose meaning can be divined from particular portions of the whole and applied to other parts of the fabric in order to interpret the unfolding future.
Doctors and scientists now divine clues about the future of individuals from the pattern of DNA that is found in every cell. Mathematicians discern the nature of fractal equations—and the geometric forms derived from them—by observing the “self-sameness” of the patterns they manifest at every level of resolution. Holographic images are contained in their entirety in each molecule of the gaseous cylinders onto which the emergent larger image is projected.
According to historians, astrologers of ancient Babylon used a double clock—one for measuring the timescale of human affairs, and another for tracking the celestial movements they believed had an influence on earthly events. In divining our own future, we too must now pay due attention to a double clock. There is the one that measures our hours and days, and the other that measures the centuries and millennia over which our disruptions of the Earth’s natural systems will continue to occur.
Even as teams of scientists race against the clock to compete with other teams in making new genetic discoveries that may cure diseases and lay the foundation for multibillion-dollar products, we must consult another clock that measures the timescales over which evolution operates—because the emergent capabilities bursting forth from the revolutionary advances in the life sciences are about to make us the principal agent of evolution.
Because of the new power that seven billion of us collectively wield with our new technologies, voracious consumption, and outsized economic dynamism, some of the ecological changes that we are setting in motion are going to unfold, the scientists tell us, in geologic time, measured by a planetary clock that tracks timespans that strain the limits of human imagination. Roughly a quarter of the 90 million tons of global warming pollution we put into the atmosphere each day will still linger there—still trapping heat—more than 10,000 years from now.
Consequently, in reconciling the difference between what “is” and what “ought to be,” we are faced with an existential conundrum. Though we have great difficulty conceiving of geologic time, we have nevertheless become a geologic force; though we cannot imagine evolutionary timescales, we are nevertheless becoming the chief force behind evolution.
The idea that human history is characterized by progress from one era to the next is not, as some have long thought, an invention of the Enlightenment. The explosion of philosophy in ancient Greece marked the beginning of recorded contemplations about the future of humankind. In the fourth century BCE, Plato wrote about progress as “a continuous process, which improves the human condition from its original state of nature to higher and higher levels of culture, economic organization and political structure towards an ideal state. Progress flows from the growing complexity of society and the need to enlarge knowledge, through the development of sciences and arts.”
In the fourth century CE, St. Augustine, who frequently quoted Plato, wrote, “The education of the human race, represented by the people of God, has advanced, like that of an individual, through certain epochs, or, as it were, ages, so that it might gradually rise from earthly to heavenly things, and from the visible to the invisible.”
Nor is progress exclusively a Western invention. Many interpret the Tao of ancient China as a guide for those who wish to progress as they make their way forward in the world—though its conception of progress is very different from what emerged in the West. The eleventh-century Islamic philosopher Muhammad al-Ghazali wrote that Islam teaches that “Sincere accomplished work towards progress and development is, therefore, an act of religious worship and is rewarded as such. The end result will be a serious, scrupulous and perfect work, true scientific progress and hence actual achievement of balanced and comprehensive development.”
At the beginning of the Renaissance, the rediscovery of the Aristotelian branch of ancient Greek philosophy—which had been preserved in Alexandria in Arabic and reintroduced to Europe in Al-Andalus—contributed to a fascination with the physical as well as the philosophical legacies of both Athens and Rome. The legacies of that recovered past nourished dreams that would find fruition in the Enlightenment, when a strong consensus emerged that secular progress is the dominant pattern in human history.
The discoveries of Copernicus, Galileo, Descartes, Newton, and the others who launched the Scientific Revolution helped to ignite a belief that, whatever God’s role or plan, the growth of knowledge made progress in human societies inevitable. Francis Bacon, who more than any other emphasized the word “progress” in describing humanity’s journey into the future, was also among the first to write about human progress with a special emphasis on subduing, dominating, and controlling nature—as if we were as separate from nature as Descartes believed the mind was separate from the body.
Centuries later, this philosophical mistake is still in need of correction. By tacitly assuming our own separateness from the ecological system of the planet, we are frequently surprised by phenomena that emerge from our inextricable connections to it. And as the power of our civilization grows exponentially, these surprises are becoming increasingly unpleasant.
The cultural legacy that still influences the scientific method is reductionist - that is, by dividing and endlessly subdividing the objects of our research and analysis, we separate interconnected phenomena and processes to develop specialized expertise. But the focusing of attention on ever narrower slices of the whole often comes at the expense of attention to the whole, which can cause us to miss the significance of emergent phenomena that spring unpredictably from the interconnections and interactions among multiple processes and networks. That is one reason why linear projections of the future are so often wrong.
A New Vision of the Past and the Future
The invention of powerful new tools and the development of potent new insights - and the discovery of rich new continents - led to exciting new ways of seeing the world and expansive optimism about the future. In the seventeenth century, the father of microbiology, Antonie van Leeuwenhoek, fashioned new lenses for the microscope (which itself had been invented in Holland less than a century earlier), and by looking through them discovered cells and bacteria. Simultaneously, his close friend in Delft, Johannes Vermeer, revolutionized portraiture with the use (most art historians agree) of the camera obscura, made possible by the new understanding of optics.
As the Scientific Revolution accelerated and the Industrial Revolution began, the idea of progress shaped prevailing conceptions of the future. In the years before his death, Thomas Jefferson wrote about the progress he had witnessed in his life and noted, “And where this progress will stop no one can say. Barbarism has, in the meantime, been receding before the steady step of amelioration, and will in time, I trust, disappear from the earth.”
Four years after Jefferson’s death, the publication by Charles Lyell of his masterwork, Principles of Geology, in 1830, profoundly disrupted the long prevailing view of humanity’s relationship to time. In the Judeo-Christian world especially, most had assumed that the Earth was only a few thousand years old, and that humans were created not long after the planet itself, but Lyell amply proved that the Earth was not thousands, but at the very least millions of years old (4.5 billion, we now know). In reshaping the past, he also reshaped the idea of the future. And he provided the temporal context for the discovery by Charles Darwin of the principles of evolution. Indeed, as a young man Darwin took Lyell’s books with him during his voyage on the Beagle.
The previously unimaginable longevity of the past revealed by Lyell inspired symmetrical dreams of distant futures in which the progress of man might reach limitless heights. In the generation that followed Lyell, Jules Verne conjured a future with rockets landing on the moon, a submarine traversing the oceans’ depths, and men traveling to the center of the Earth.
The exuberant optimism of the nineteenth century was dampened for many by the excesses of the Second Industrial Revolution, but was revivified during the first decade of the twentieth century with the birth of a political movement based on the belief that progress required governmental policy interventions and social changes in order to ameliorate the problems accompanying industrialization and consolidate its obvious benefits. As the scientific and technological revolution brought some of the visions conjured by Verne and his successors into reality, optimism about the future gained further momentum.
But the balance of the twentieth century brought two world wars and the murder of millions by totalitarian dictators of the left and right to serve their own twisted conceptions of progress—and our view of the future began to change. The malignant nightmare of the Thousand Year Reich, the Holocaust, and the cruelties of Stalin and Mao came to be emblematic of the potential for emergent evil emanating from the use of any means, however horrific, in an effort to impose grand designs for the future of humanity that conformed to the visions of twisted men with too much power.
In the aftermath of World War II, the lingering dismay at the way totalitarian governments had used the wondrous new communications technologies of radio and film to persuade millions to suppress their better instincts and conform their lives to an evil design—coupled with the deep emotional and spiritual impact of the atomic sword of Damocles that the emergence of the nuclear arms race left hanging over civilization—reawakened concerns that new inventions might be double-edged. The uneasiness in the popular mind that powerful technologies—whatever their benefits—might also magnify the innate human vulnerability to hubris deepened for many the loss of their confidence that progress was a reliable guiding star.
The prophecies of Jules Verne were replaced by those of Aldous Huxley, George Orwell, and H. G. Wells, and by popular movies about destructive monsters from the ancient past awakened by nuclear testing, dangerous creatures modified by genetic engineering gone awry, and malevolent robots from the distant future or from distant planets, all seemingly bent on ravaging humanity’s future.
And now many wonder: who are we? Aristotle wrote that the end of a thing defines its essential nature. If we are forced to contemplate the possibility that we might become the architects of our own demise as a civilization, then there are necessarily implications for how we answer the question: what is our essential nature as a species? As a scientist once reframed the question: is the combination of an opposable thumb and a neocortex viable as a sustainable form of life on Earth?
Our natural and healthy preference for optimism about the future is difficult to reconcile with the gnawing concerns expressed by many that all is not well, and that left to its own devices the future may be unfolding in ways that threaten some of the human values we most cherish. The future, in other words, now casts a shadow upon the present. It may be comforting, but of little practical use, to say, “I am an optimist!” Optimism is a form of prayer. Prayer does, in my personal view, have genuine spiritual power. But I also believe, in the words of the old African saying, “When you pray, move your feet.” Prayer without action, like optimism without engagement, is passive aggression toward the future.
Even those who understand the different dangers we are facing and are committed to taking action often feel stymied by a sense of powerlessness. On the issue of climate, for example, they change their own behaviors and habits, reduce their impact on the environment, speak out and vote, but still feel they are having precious little impact, because the powerful momentum of the global machine we have built to give us progress seems almost independent of human control. Where are the levers to pull, the buttons to push? Is there a steering mechanism? Do our hands have enough strength to operate the controls?
More than a decade before writing Faust, Goethe wrote his well-known poem “The Sorcerer’s Apprentice” about a young trainee who, left to his own devices, dared to use one of his master’s magic spells in order to bring to life the broom he was supposed to be using to clean the workshop. But once animated, the broom could not be stopped. Growing desperate to halt the broom’s increasing frenzy of activity, the apprentice split the broom with an axe—which caused it to self-replicate, with each half growing into a new animated broom. Only when the master returned was the process brought back under control.
Democratic Capitalism and Its Discontents
The idea of making truly meaningful collective decisions in a democracy aimed at steering the global machinery we have set in motion is naïve, even silly, according to those who have long since placed their faith in the future not in human hands, but in the invisible hand of the marketplace. As more of the power to make decisions about the future flows from political systems to markets, and as ever more powerful technologies magnify the strength of the invisible hand, the muscles of self-governance have atrophied.
That is actually a welcome outcome for some who have found ways to accumulate great fortunes from the unrestrained operations of this global machinery. Indeed, many of them have used their wealth to reinforce the idea that self-governance is futile at best and, when it works at all, leads to dangerous meddling that interferes with both markets and technological determinism. The ideological condominium formed in the alliance between capitalism and representative democracy that has been so fruitful in expanding the potential for freedom, peace, and prosperity has been split asunder by the encroachment of concentrated wealth from the market sphere into the democracy sphere.
Though markets have no peer in collecting, processing, and utilizing massive flows of information to allocate resources and balance supply with demand, the information in markets is of a particularly granular variety. It is devoid of opinion, character, personality, feeling, love, or faith. It’s just numbers. Democracy, on the other hand, when it operates in a healthy pattern, produces from the interactions of people with different perspectives, predispositions, and life experiences emergent wisdom and creativity that is on a completely different plane. It carries dreams and hopes for the future. By tolerating the routine use of wealth to distort, degrade, and corrupt the process of democracy, we are depriving ourselves of the opportunity to use the “last best hope” to find a sustainable path for humanity through the most disruptive and chaotic changes civilization has ever confronted.
In the United States, many have cheered the withering of self-governance and have celebrated the notion that we should no longer even try to control our own destiny through democratic decision making. Some have recommended, only half in jest, that government should be diminished to the point where it can be “drowned in the bathtub.” They have enlisted politicians in the effort to paralyze the ability of government to serve any interests other than those of the global machine, recruited a fifth column in the Fourth Estate, and hired legions of lobbyists to block any collective decisions about the future that serve the public interest. They even seem to sincerely believe, as many have often written, that there is no such thing as “the public interest.”
The new self-organized pattern of the Congress serves the special interests that are providing most of the campaign money with which candidates - incumbents and challengers alike - purchase television commercials. It no longer responds to any but the most emotional concerns of the American people. Its members are still “representatives,” but the vast majority of them now represent the people and corporations who donate money, not the people who actually vote in their congressional districts.
The world’s need for intelligent, clear, values-based leadership from the United States is greater now than ever before - and the absence of any suitable alternative is clearer now than ever before. Unfortunately, the decline of U.S. democracy has degraded its capacity for clear collective thinking, led to a series of remarkably poor policy decisions on crucially significant issues, and left the global community rudderless as it faces the necessity of responding intelligently and quickly to the implications of the six emergent changes described in this book. The restoration of U.S. democracy, or the emergence of leadership elsewhere in the world, is essential to understanding and responding to these changes in order to shape the future.
One of the six drivers of change described in this book - the emergence of a digital network connecting the thoughts and feelings of most people in every country of the world - offers the greatest source of hope that the healthy functioning of democratic deliberation and collective decision making can be restored in time to reclaim humanity’s capacity to reason together and chart a safe course into the future.
Capitalism - if reformed and made sustainable - can serve the world better than any other economic system in making the difficult but necessary changes to the relationship between the human enterprise and the ecological and biological systems of the Earth. Together, sustainable capitalism and healthy democratic decision making can empower us to save the future. So we have to think clearly about how both of these essential tools can be repaired and reformed.
The structure of these decision-making systems and the ways in which we measure progress - or the lack thereof - toward the goals we decide are important have a profound influence on the future we actually create. When we make economic choices in favor of “growth,” it matters a lot which definition of growth we use. If the impact of pollution is systematically removed from the measurement of what we call “progress,” then we start to ignore it and should not be surprised when much of our progress is accompanied by lots of pollution.
If the systems we use for recognizing and measuring profit are based on a narrow definition - for example, quarterly projections of earnings per share, or quarterly unemployment statistics that don’t include people who have given up looking for work, those who have been forced to take large pay cuts in order to continue working, or those who are flipping hamburgers instead of using higher-value skills hard won with education or prior experience - then what we are seeing is an imperfect and partial representation of a much larger reality. When we become accustomed to making important choices about the future on the basis of distorted and misleading information, the results of those decisions are more likely to fall short of our expectations.
Psychologists and neuroscientists have studied a phenomenon called selective attention—the tendency of people who focus so intently on particular images that they become oblivious to other images present in their field of vision.
We select the things to which we pay attention not only by curiosity, preference, and habit, but also through our selection of the observational tools, technologies, and systems we rely on in making choices. And these tools implicitly mark some things as significant and obscure others to the point that we completely ignore them. In other words, the tools we use can have their own selective attention distortions.
For example, the system of economic value measurement known as gross domestic product, or GDP, includes some values and arbitrarily excludes others. So when we use GDP as a lens through which to observe economic activity, we pay attention to that which is measured and tend to become oblivious to those things that are not measured at all. British mathematician and philosopher Alfred North Whitehead called the obsession with measurements “the fallacy of misplaced concreteness.”
Here is a metaphor to illustrate the point: the electromagnetic spectrum is often portrayed as a long thin horizontal rectangle divided into differently colored segments that represent the different wavelengths of electromagnetic energy - usually ranging from low-frequency, long-wavelength radiation like radio waves on the left, extending through microwaves, infrared, ultraviolet, X-rays, and the like, to extremely high-frequency gamma radiation at the right end of the rectangle.
Somewhere near the middle of this rectangle is a very thin section representing visible light - which is, of course, the only part of the entire spectrum that can be seen with the human eye. But since the human eye is normally the only “instrument” with which most of us attempt to “see” the world around us, we are naturally oblivious to all of the information contained in the 99.9 percent of the spectrum that is invisible to us.
By supplementing our natural vision with instruments capable of “seeing” the rest of the spectrum, however, we are able to enhance our understanding of the world around us by collecting and interpreting much more information. During the eight years I worked in the White House, I started every day, six days a week, with a lengthy briefing from the intelligence community on all the issues affecting national security and vital U.S. interests, and it routinely contained information collected from almost all parts of the electromagnetic spectrum. It was, as a result, a much more complete and accurate picture of a very complex reality.
One of the current realities in the business world that has been most surprising to me is the near consensus that markets are “short on long and long on short” - that is, there is an unhealthy focus on very short--term goals, to the exclusion of long-term goals. If the incentives routinely provided for business leaders - and political leaders - are focused on extremely short-term horizons, then no one should be surprised if the decisions they make in pursuit of the rewards to be gained are also focused on the short term - at the expense of any consideration of the future. Compensation and incentive structures reinforce these biases and penalize most CEOs and businesses that dare to focus on more sustainable longer-term strategies. “Short-termism” has long since become a frequently used buzzword in business circles. In both business and politics, short-term decision making is dominant.
“Quarterly capitalism” is a phrase some use to describe the prevailing practice of managing businesses from one three-month period to the next, and focusing budgets and strategies on the constant effort to ensure that each quarter’s earnings per share report never fails to meet projections or the market’s expectations. When investors and CEOs focus on a definition of “growth” that excludes the health and well-being of the communities where businesses are located, the health of the employees who do most of the work, and the impact of the businesses’ operations on the environment, they are tacitly choosing to ignore material facts with the potential to make real growth unsustainable.
Similarly, the dominance of money in modern politics - particularly in the United States - has now led to what might be described as “quarterly democracy.” Every ninety days, incumbent officeholders running for reelection and challengers in political contests are required to publicly report their fundraising totals for the previous ninety days. At the end of each of these quarters, there is a flurry of fundraising events, email solicitations, and fundraising telephone calls to maximize the amount that can be reported - much as a puffer fish increases its perceived size in the presence of another puffer fish encroaching on its territory.
Our evolutionary heritage has made us vulnerable to numerous stimuli that trigger short-term thinking. Though we also have the capacity for long-term thinking, of course, it requires effort, and neuroscientists tell us that distractions, stress, and fear easily disrupt the processes by which we focus on the longer term. When elected officials are under constant systemic stress to focus intently on short-term horizons, the future gets short shrift.
This is particularly dangerous during a period of rapid change. Some of the trends now under way are so well documented by observations in the past that projections of those same trends into the future can be made with a very high degree of confidence. The rate of advancement in computer chips, to pick a well-known example, is understood more than well enough to justify predictions that computer chips will continue to advance rapidly in the future.
The speedy drop in the cost of sequencing DNA has occurred for reasons that are understood more than well enough to justify predictions that this trend too will continue to shape our future. The accumulation of greenhouse gases in the past and the rise in global temperatures they have caused is also understood more than well enough to justify predictions of what will happen to global temperatures if we continue to increase emissions at the same rate in the future - and what the consequences of much higher global temperatures would be.
Other changes, however, burst upon the world seemingly fully formed: a brand-new pattern that represents a sudden shift from an older pattern that persisted for as far back in the past as humans can recall. In our own lives, we are accustomed to gradual, linear change. But sometimes the potential for change builds up without being visibly manifested until the inchoate pressure for change reaches a critical mass powerful enough to break through whatever systemic barriers have held the change back. Then suddenly one pattern gives way to another that is entirely new. This “emergence” of systemic change is often difficult to predict, but does occur frequently both in nature and in complex systems designed by human beings.
Many who were once fascinated and excited about the possibilities of the future are now focused solely on the implications of the future’s potential for the business, political, and security strategies of the present. As the Scientific Revolution accelerated in the last decades of the twentieth century, corporate planners and military strategists began to devote considerably more attention to the study of alternative futures, motivated by a concern that the potency of new scientific and technological discoveries could threaten the strategic interests - or even survival - of business models and the balance of power among nations.
What is our present conception of the future? How does our image of the future affect the choices we are making in the present? Do we still believe that we have the power to shape our collective future on Earth and choose from among the alternative futures one that preserves our deepest values and makes life better than it is in the present? Or do we have our own crisis of confidence in humanity’s future?
If the spectrum of past, present, and future were displayed as a long thin rectangle similar to that used to portray the electromagnetic spectrum, the birth of Planet Earth 4.5 billion years ago would be at the far left end. Moving to the right, we would see the emergence of life 3.8 billion years ago, the appearance of multicellular life 2.8 billion years ago, the appearance of the first plant life on land 475 million years ago, the first vertebrates more than 400 million years ago, and the first primates 65 million years ago. Then, moving all the way to the right end of the rectangle, the death of the sun would appear 7.5 billion years from now.
The narrow slice of time to the left of the midpoint in this spectrum - the one that represents the history of the human species - is an even narrower slice of the spectrum of time than is visible light of the electromagnetic spectrum. The thoughts we devote to these vast stretches of time in the past and future are often fleeting at best.
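The comparison above can be checked with simple arithmetic. The following sketch is a back-of-the-envelope estimate, not part of the original text: the 300,000-year figure for the age of our species is an assumption, and visible light is taken as roughly 0.1 percent of the portrayed spectrum, per the “99.9 percent . . . invisible” figure cited earlier.

```python
# Back-of-the-envelope check: does human history occupy an even narrower
# slice of the timeline than visible light does of the portrayed
# electromagnetic spectrum?

EARTH_AGE_YEARS = 4.5e9       # left end of the rectangle (birth of Earth)
SUN_DEATH_YEARS = 7.5e9       # right end, measured forward from the present
HUMAN_HISTORY_YEARS = 3.0e5   # assumed age of Homo sapiens (not in the text)

total_span = EARTH_AGE_YEARS + SUN_DEATH_YEARS   # 12 billion years in all
human_fraction = HUMAN_HISTORY_YEARS / total_span

# The text implies visible light is about 0.1 percent of the portrayed
# spectrum (the "99.9 percent invisible" figure).
visible_light_fraction = 0.001

print(f"Human history: {human_fraction * 100:.5f} percent of the timeline")
print(f"Visible light: {visible_light_fraction * 100:.1f} percent of the spectrum")
print("Human history is the narrower slice:",
      human_fraction < visible_light_fraction)
```

Under these assumptions, human history spans roughly 0.0025 percent of the 12-billion-year rectangle, well under visible light’s 0.1 percent share, which supports the comparison drawn in the text.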
There are ample reasons for optimism about the future. For the present, war seems to be declining. Global poverty is declining.
Some fearsome diseases have been conquered and others are being held at bay. Lifespans are lengthening. Standards of living and average incomes - at least on a global basis - are improving. Knowledge and literacy are spreading. The tools and technologies we are developing - including Internet-based communication - are growing in power and efficacy. Our general understanding of our world, indeed, our universe (or multiverse!) has been growing exponentially. There have been periods in the past when limits to our growth and success as a species appeared to threaten our future, only to be transcended by new advances - the Green Revolution of the second half of the twentieth century, for example.
So the positive and negative sets of trends are occurring simultaneously. The fact that some are welcome and others are not has an effect on our perception of them. The unwelcome trends are sometimes ignored, at least in part because they are unpleasant to think about. Any uncertainty about them that can be conjured to justify inaction is often seized upon with enthusiasm, while new hard evidence establishing their reality is often resisted with even stronger denial of the reality the evidence supports.
Just as naïve optimism can amount to self-deception, so too can a predisposition to pessimism blind us to bases for legitimate hope that we can find a path that leads around and through the dangers that lie ahead. Indeed, I am an optimist - though my optimism is predicated on the hope that we will find ways to see and think clearly about the obvious trends that are even now gaining momentum, that we will reason together and attend to the dangerous distortions in our present ways of describing and measuring the powerful changes that are now under way, that we will actively choose to preserve human values and protect them, not least against the mechanistic and destructive consequences of our baser instincts that are now magnified by technologies more powerful than any that those in previous generations, even Jules Verne, could have imagined. I have tried my best to describe what I believe the evidence shows is more likely than not to present us with important choices that we must consciously make together. I do so not out of fear, but because I believe in the future.
Excerpted from The Future by Al Gore. Copyright © 2013 by Al Gore. Excerpted by permission of Random House, a division of Random House, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Iranian online video game wins at German intl. game contest
Iranian online video game Asmandez II (Sky Fortress II) has won at the German Massively Multiplayer Online Game (MMO) 2013 contest.
Asmandez II beat out four other competitors and picked up the Best Indie Game Award of the eighth edition of the contest.
Asmandez II is a browser-based science-fiction strategy MMOG in the space opera genre. The game uses the latest web technologies, HTML5 and CSS3, and can be played in both English and Persian.
The online strategy game Asmandez II is the sequel to Iran's first online video game, Asmandez I, which was released in July 2010 and quickly gathered more than 100,000 online users.
Both versions of Asmandez are set in a future in which the inhabitants of the Solar System, at war with robots, attempt to migrate to another system called Limbas.
The sequel, which features more advanced narrative strategies and higher artistic production values, can be played on phones, tablets, and PCs.
Produced by Iran's National Foundation for Computer Games, Asmandez II is capable of supporting over 5,000 users at the same time.
The science-fiction games were developed by a group of young Iranian experts in an effort to promote computer science in the country.
Iran had earlier released its first three-dimensional video game, Age of Heroes, in 2009; it was designed around stories narrated in the Persian epic poet Ferdowsi's magnum opus, The Shahnameh.
“Some 10 million people use computer games in Iran, only 100 of whom can design and develop video games,” Behrouz Minaie, head of Iran's National Foundation for Computer Games, had earlier said.
A massively multiplayer online game (MMO or MMOG) is a multiplayer video game capable of supporting hundreds or thousands of players simultaneously over the Internet.
Sheriff Elizabeth Warren
Posted on Feb 17, 2013
John Darkow, Cagle Cartoons, Columbia Daily Tribune, Missouri