The field of low-code IDEs has many competitors (e.g. Mendix, luna-lang.org). They usually offer a graph view in which you construct a program from basic building blocks. But it always feels like a corset that sooner or later has to be loosened with "code behind". Our competitors can only go from graph to code; we, on the other hand, can translate graphs to code and also translate existing code into graphs that are easy to comprehend. We take thousands of open-source libraries written in Haskell, a language that avoids side effects, translate them into Lambda Calculus, and overlay the result into one big graph. This bears some resemblance to a neural network, but it makes neural networks look primitive in comparison: it is well typed and contains thousands of pieces of pure functional algorithms that today's neural networks could never learn. Could it be that investments of billions of dollars in AI do not guarantee victory, and that the small academic Haskell community is right?
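To make the code-to-graph direction concrete, here is a minimal sketch of the idea, not our actual implementation: a tiny Lambda Calculus core and a function that flattens a term into a node/edge list of the kind a graph view could render. All names (`Term`, `toGraph`) are illustrative assumptions.

```haskell
module Main where

-- A minimal Lambda Calculus core.
data Term
  = Var String      -- variable reference
  | Lam String Term -- lambda abstraction
  | App Term Term   -- application
  deriving Show

-- Flatten a term into labelled nodes and parent->child edges,
-- assigning node ids in a pre-order walk.
toGraph :: Term -> ([(Int, String)], [(Int, Int)])
toGraph t = let (_, ns, es) = go 0 t in (ns, es)
  where
    go i (Var x)   = (i + 1, [(i, "Var " ++ x)], [])
    go i (Lam x b) =
      let (i', ns, es) = go (i + 1) b
      in (i', (i, "Lam " ++ x) : ns, (i, i + 1) : es)
    go i (App f a) =
      let (i1, ns1, es1) = go (i + 1) f
          (i2, ns2, es2) = go i1 a
      in (i2, (i, "App") : ns1 ++ ns2, (i, i + 1) : (i, i1) : es1 ++ es2)

main :: IO ()
main = do
  -- identity applied to a variable: (\x -> x) y
  let (nodes, edges) = toGraph (App (Lam "x" (Var "x")) (Var "y"))
  print nodes  -- [(0,"App"),(1,"Lam x"),(2,"Var x"),(3,"Var y")]
  print edges  -- [(0,1),(0,3),(1,2)]
```

Because every Haskell program desugars to a small core like this, the same walk scales from a toy term to a whole library: each definition becomes a subgraph, and shared functions become shared nodes in the big overlay graph.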