I’ve been thinking about this for a couple of months now. Wouldn’t it be great to have, say, Claude help you with programming on a station: building graphics, scripts, mass-linking points, and so on?
I recently decided to build one. I’m still in the early testing phase: basically testing code in Program blocks to make sure there’s enough functionality available to do everything I mentioned above and more.
This would mean providing the AI with enough context that it knows what it’s working with: access to documentation for reference as needed, and maybe a reference codebase.
So far, I have a chatbot that works via an API (basically ChatGPT in Niagara), and it’s able to read the config to determine what types of networks and devices are present in a station. It’s also able to reference documentation related to active modules. It can’t do anything yet; I’m hoping to give it a more robust “context window” before I have the AI making changes to the station. I also want to make sure the information is condensed enough that the user isn’t blowing through their tokens.
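To make the token-budget idea concrete, here’s a rough sketch of what I mean by condensing the station config before it goes into the context window. The XML snippet and its element/attribute names are purely illustrative (a real station export is far larger and structured differently); the point is collapsing raw markup into a few summary lines:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical excerpt of a station export. Element and attribute
# names here are made up for illustration, not the real .bog schema.
STATION_XML = """
<station>
  <network type="BacnetNetwork" name="BacnetNetwork">
    <device type="BacnetDevice" name="VAV-1"/>
    <device type="BacnetDevice" name="VAV-2"/>
  </network>
  <network type="ModbusTcpNetwork" name="ModbusNet">
    <device type="ModbusDevice" name="Meter-1"/>
  </network>
</station>
"""

def condense(xml_text: str) -> str:
    """Collapse a station tree into one line per network so the AI
    sees the structure without burning tokens on raw XML."""
    root = ET.fromstring(xml_text)
    lines = []
    for net in root.findall("network"):
        counts = Counter(d.get("type") for d in net.findall("device"))
        detail = ", ".join(f"{n}x {t}" for t, n in counts.items())
        lines.append(f"{net.get('name')} ({net.get('type')}): {detail}")
    return "\n".join(lines)

print(condense(STATION_XML))
# BacnetNetwork (BacnetNetwork): 2x BacnetDevice
# ModbusNet (ModbusTcpNetwork): 1x ModbusDevice
```

A couple of summary lines like that carry most of what the model needs to answer “what am I working with?” at a tiny fraction of the token cost of the full export.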
I also have a few security concerns, along with ideas to mitigate them, e.g. how do you prevent the agent from accidentally deleting your station or files on a station? Niagara is a pretty closed-off ecosystem, so most issues are likely to emerge from the behavior of the AI itself.
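One mitigation I’m considering is routing every action the agent proposes through a simple authorization gate: read-only operations pass, destructive ones require explicit human confirmation, and anything unrecognized is denied by default. This is just a sketch of the idea, not a Niagara API, and the action names are hypothetical:

```python
# Action verbs the agent is allowed to use, grouped by risk.
# These categories are illustrative; a real list would be much longer.
READ_ONLY = {"read", "list", "summarize"}
DESTRUCTIVE = {"delete", "rename", "overwrite"}

def authorize(action: str, target: str, confirmed: bool = False) -> bool:
    """Decide whether the agent may perform `action` on `target`.

    Destructive actions only proceed with a human in the loop;
    unknown actions are denied by default.
    """
    if action in READ_ONLY:
        return True
    if action in DESTRUCTIVE:
        return confirmed
    return False

print(authorize("read", "/Drivers/BacnetNetwork"))           # True
print(authorize("delete", "/Files/px", confirmed=False))     # False
```

Deny-by-default matters here: if the model invents a verb the gate doesn’t know about, nothing happens rather than something unexpected.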
I’m still in the early development stage. My first goal is to make sure the AI has enough context to know what kind of environment it’s operating in and enough information to make informed changes.
I’m probably a month away from a working prototype. On the bright side, the station is one big XML file, and LLMs are great at working with structured markup like that.
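Since the station tree is XML, one thing I’ve played with is flattening it into a markdown-style outline, which models tend to follow more reliably than nested tags. Again, the snippet and names below are illustrative, not the real station schema:

```python
import xml.etree.ElementTree as ET

# Toy fragment standing in for a station export; tag and attribute
# names are hypothetical.
SNIPPET = """
<station name="demo">
  <network name="BacnetNetwork">
    <device name="VAV-1">
      <point name="RoomTemp"/>
    </device>
  </network>
</station>
"""

def to_markdown(elem, depth=0):
    """Render an XML tree as an indented markdown bullet list."""
    lines = [f"{'  ' * depth}- {elem.tag}: {elem.get('name', '?')}"]
    for child in elem:
        lines.extend(to_markdown(child, depth + 1))
    return lines

print("\n".join(to_markdown(ET.fromstring(SNIPPET))))
# - station: demo
#   - network: BacnetNetwork
#     - device: VAV-1
#       - point: RoomTemp
```

The outline keeps the hierarchy but drops the closing tags and attribute noise, which is exactly the part the model doesn’t need.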
I’m currently only testing code via Program blocks, and waiting to get my hands on N5 to develop a proper module.
What are your thoughts on this, or on AI assistants in general? Is anyone else working on something similar?