Niagara AI assistant

I’ve been thinking about this for a couple of months now. Wouldn’t it be great to have, say, Claude help you with programming on a station: building graphics, scripts, mass-linking points, and so on?

I recently decided to build one. I’m still in the early testing phase: basically testing code in Program blocks to make sure there is enough functionality available to do everything I mentioned above, and more.

This would mean providing the AI with enough context that it knows what it’s working with: access to documentation for reference as needed, and maybe a codebase as an example.

So far, I have a chatbot that works via an API (basically ChatGPT in Niagara), and it’s able to read the config to determine what types of networks and devices are present in a station. It’s also able to reference documentation related to active modules. It can’t do anything yet; I’m hoping to give it a more robust “context window” before I have the AI making changes to the station. I also want to make sure the information is condensed enough that the user isn’t blowing through their tokens.
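To make the condensing idea concrete, here is a minimal sketch of summarizing a station’s network/device tree into a few lines before it goes into the prompt. The XML shape is a simplified stand-in, not the real bog schema, and Python is used just for illustration (a real Program object would be Java):

```python
# Condense a station-config tree into a short summary so the LLM's
# context stays small. The element names here are illustrative only.
import xml.etree.ElementTree as ET
from collections import Counter

SAMPLE = """<station>
  <network type="BacnetNetwork" name="BacnetNet">
    <device type="BacnetDevice" name="VAV-1"/>
    <device type="BacnetDevice" name="VAV-2"/>
  </network>
  <network type="ModbusTcpNetwork" name="Modbus">
    <device type="ModbusDevice" name="Meter-1"/>
  </network>
</station>"""

def summarize_station(xml_text: str) -> str:
    """Return a few short lines instead of the full config tree."""
    root = ET.fromstring(xml_text)
    lines = []
    for net in root.findall("network"):
        counts = Counter(d.get("type") for d in net.findall("device"))
        devices = ", ".join(f"{n}x {t}" for t, n in counts.items())
        lines.append(f"{net.get('name')} ({net.get('type')}): {devices}")
    return "\n".join(lines)

print(summarize_station(SAMPLE))
```

A summary like this costs a handful of tokens instead of the thousands the raw tree would.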

I also have a few security concerns, along with ideas to mitigate them, e.g. how do you prevent the agent from accidentally deleting your station or files on a station? Niagara is a pretty closed-off ecosystem, so most issues are likely to emerge from the behavior of the AI itself.
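One mitigation I’m considering can be sketched as an allowlist gate: every action the agent requests goes through a dispatcher that refuses destructive calls outright. The operation names below are made up for illustration:

```python
# Gate agent actions through an allowlist so destructive calls are
# rejected before they run. All op names here are hypothetical.
ALLOWED_OPS = {"read_point", "add_point", "link_points", "rename_component"}
DESTRUCTIVE_OPS = {"delete_component", "delete_file", "delete_station"}

def dispatch(op: str, **kwargs):
    if op in DESTRUCTIVE_OPS:
        raise PermissionError(f"agent may not call destructive op: {op}")
    if op not in ALLOWED_OPS:
        raise ValueError(f"unknown or unapproved op: {op}")
    # ... hand off to the real implementation here
    return f"executed {op}"
```

The point is that the safety rule lives outside the model, so a bad completion can’t talk its way around it.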

I’m still in the early development stage. My first goal is to make sure the AI has enough context to know what kind of environment it is operating in, and enough information to make informed changes.

I’m probably a month away from a working prototype. On the bright side, the station is one big XML file, and LLMs are great at working with markdown files.
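Since the station is XML and models handle markdown well, one option is converting the tree into a markdown-style outline before feeding it to the agent. A minimal sketch, with illustrative element names rather than the real bog schema:

```python
# Flatten a bog-style XML tree into an indented markdown outline.
# The tags and names are stand-ins, not the actual Niagara format.
import xml.etree.ElementTree as ET

def to_markdown(elem, depth=0):
    """Recursively render each element as an indented list item."""
    name = elem.get("name") or elem.tag
    lines = [f"{'  ' * depth}- {name} ({elem.tag})"]
    for child in elem:
        lines.extend(to_markdown(child, depth + 1))
    return lines

tree = ET.fromstring(
    '<station name="Demo"><folder name="Drivers">'
    '<point name="ZoneTemp"/></folder></station>'
)
print("\n".join(to_markdown(tree)))
```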

I’m currently only testing code via Program blocks, waiting to get my hands on N5 to develop a proper module.

What are your thoughts on this, or on AI assistants in general? Is anyone else working on something similar?

Sounds like a fun idea. I’m all for AI when used responsibly. Would I let an LLM agent manipulate the bog file on a running station? Hell no! But on an offline station under development? Sure, that wouldn’t scare me at all.


One of the goals is to make development of new stations faster.

Having used Niagara for the last 4 years, I have a decent grasp of what I would want the assistant to touch and what I wouldn’t. One example is user services: instead of letting the agent read and edit the entire user list, it would be better to write custom functions through which it can add, modify, and delete users.
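The user-service idea can be sketched as a narrow wrapper: the agent only sees three operations, and protections are hard-coded into them. The in-memory dict below is a stand-in for the real UserService, and the class name is hypothetical:

```python
# Narrow wrapper around user management: the agent can only call
# add/modify/delete, and hard rules are enforced inside the wrapper.
class UserServiceTool:
    def __init__(self):
        self._users = {}          # name -> role; stand-in for UserService

    def add_user(self, name: str, role: str):
        if name in self._users:
            raise ValueError(f"user {name} already exists")
        self._users[name] = role

    def modify_user(self, name: str, role: str):
        if name not in self._users:
            raise KeyError(name)
        self._users[name] = role

    def delete_user(self, name: str):
        if name == "admin":       # hard rule the agent cannot override
            raise PermissionError("admin account is protected")
        self._users.pop(name, None)
```

The agent never gets the raw list, so the blast radius of a bad call is limited to what the wrapper permits.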

It also would only have filtered access to the bog file. There are a few reasons for that, but the biggest is that it’s just not feasible to feed the entire bog file to the agent.

I have a few ideas for managing disaster scenarios too: quick station snapshots to revert to, the agent asking for confirmation before running a task, automatic backups at certain intervals, and so on.
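The snapshot idea is simple enough to sketch: copy the station file aside before each agent task, and restore it on demand. File names here are illustrative:

```python
# Take a cheap snapshot of the station file before an agent task so any
# change can be rolled back. Paths and suffixes are illustrative only.
import os
import shutil
import tempfile

def snapshot(station_file: str) -> str:
    """Copy the station file aside and return the backup path."""
    fd, backup = tempfile.mkstemp(suffix=".bog.bak")
    os.close(fd)
    shutil.copy2(station_file, backup)
    return backup

def revert(station_file: str, backup: str):
    """Restore the station file from a previous snapshot."""
    shutil.copy2(backup, station_file)
```

A real version would also timestamp the backups and prune old ones, but the shape is the same.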

Another thing I want is the option to undo the changes the agent made, but I have no idea how to go about that yet.
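One possible approach (just a sketch of the idea, not a claim about how it would work in Niagara): record an inverse operation for every change the agent makes, then run them in reverse order to undo. Plain callables stand in for real station edits here:

```python
# Undo via recorded inverse operations: each agent edit registers a
# callable that reverses it, and undo pops them in reverse order.
class UndoLog:
    def __init__(self):
        self._inverses = []

    def record(self, inverse):
        """Register a callable that reverses one change."""
        self._inverses.append(inverse)

    def undo_last(self):
        if self._inverses:
            self._inverses.pop()()

    def undo_all(self):
        while self._inverses:
            self.undo_last()

# Example: a dict stands in for station state the agent modifies.
state = {"setpoint": 72}
log = UndoLog()
old = state["setpoint"]
log.record(lambda: state.update(setpoint=old))
state["setpoint"] = 68            # the agent's change
log.undo_all()                    # back to the original value
print(state["setpoint"])          # → 72
```

The snapshot approach above is the coarse-grained version of the same idea; this one works per change instead of per task.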

Of course, this is just a side project that I’ll be working on in my free time, and some of the things I want are pretty advanced and are just ideas. I imagine implementing some of them would be a complex task. My biggest issue is that I don’t have access to Workbench outside of my work computer.

I’ll publish the code on GitHub once I have a somewhat working product, so others can contribute to the project if they want to.

I think we’ll have personal agents in Niagara one way or another, so why not get on top of it first?

There is a Google Gemini mod here: https://ddc-talk.com/t/google-ai-agent-programobject/2530/3