Age of Invisible Machines – An Unpractical Guide to Hyperautomated Systems

Those who read this column regularly know that I have little patience for authors, academics, and consultants who pander to management rather than teach them. Age of Invisible Machines is, sadly, such a book.

That brings us to the second and bigger issue. Since long before Alfred Nobel invented dynamite, inventors have tended to ignore the consequences of their creations. Scientists often use the same excuse, claiming they have no authority to determine how their inventions affect society or how they are used; they are just inventing and discovering things. Theoretical science is a good thing, but it is time to stop absolving technologists who work on applications with direct social impact.

The ethical-AI movement is a natural extension of the familiar engineering practice of understanding and addressing problems from the very beginning. Good programmers consider system-wide issues during the design phase; if you wait until the debugging phase, it is too late to build an effective system. Artificial intelligence will have a major impact on society: it will change how society views work, and who is allowed to work. The current overvaluation of AI stocks is a result of promised solutions that most people do not understand; only a few truly do.

Society is now in a new Gilded Age. Over the past forty years, regulations and protections have been weakened, and the new industrial revolution poses even greater challenges and dangers. No one should discuss systems with such enormous potential impact on society without addressing those impacts. This book's approach is outdated: it calls for major social disruption while ignoring society. For that reason, I cannot recommend it.