I’ve been asked to see if I can simplify and summarise an article in the Guardian about the bombing of Shajareh Tayyebeh primary school in Iran by the US military.
The article is both very good and very bad. As far as I can tell, it tells a detailed and accurate story. Sadly, it is absolutely crammed with details and acronyms that are often superfluous to the message, and this despite its own mocking of the constantly changing jargon.
One example is that it prominently uses the acronym LLM several times without explaining it. The only real reason LLM is relevant at all is that some commentators thought the missiles were targeted by a chatbot, and chatbots use Large Language Models to make sense of conversations.
As well as cramming in too much history and factual detail, the article also cherry-picks its detail to take aim at particular targets such as the US and Palantir. The people in those, and other, organisations may well warrant criticism, but not as a way of boasting about others’ superiority.
I think the gist of the situation is that there are systems (a combination of technology and people) that provide the information used to make targeting decisions in a combat situation. There are also systems that allow greater accuracy and speed in the delivery of the actual munitions. The technology used in delivery can now be faster and more accurate, and can be controlled remotely, either directly by an operator or automatically by a technological system. Some of these systems now include some AI to analyse ever greater amounts of data, including satellite imagery, mapping and coordinate details, weather, and technological capability.
The development argument is that politicians and military high-ups make faulty decisions about strategic developments, often based on ignorance and ego. Even if the actual developers show a conscience about what is proposed, they will simply be bypassed and replaced. I can tell you from personal experience that it takes a lot of resilience, mental agility and a persuasive personality to redirect strategies in a more valuable direction. The other area of failure in the development of all technologies is that each level of people involved loses interest when it comes to testing the system against its objectives, and to re-examining those objectives to ensure they still make sense.
The usage argument is on several levels. The first is that the more capable a technology appears to be, the less attention those operating it pay to whether it is working properly. The second is that the further the operators are from the effects of the technology, i.e. dead bodies and destruction, the less likely they are to question what they are doing. This distance is even greater for the original strategists and for many members of the public. The crux of the argument is that this situation is not new, and that people of all sorts fail to pick up the nuances of how it all works, and therefore who or what is to blame.
Finally, on AI: I have written elsewhere about what it can and can’t do, but the main message is that you can’t rely on it in all situations. It tries to mimic human intelligence while adding far greater data-processing capability. Donald Trump apparently has some intelligence but, personally, I wouldn’t rely on anything he said or any decision he made.
I hope that is helpful. On a personal level, such situations always remind me of the cartoon film ‘Up’. There is a pack of scary dogs and, when one of them thinks it sees a squirrel, they all turn their heads in unison and growl ‘Squirrel!’. The other thing about those dogs is that, like the ‘wizard’ in The Wizard of Oz, they are really just babies wanting to be cuddled and fussed.
