

ohh you’re great, I definitely won’t forget !




ohh okay good to know, thanks for the advice !


bro, there’s one container for the app and one for Postgres; the whole thing doesn’t go over 500 MB of RAM… where do you expect to host the database? Sorry if I sound a bit rude, but you can always edit the .yaml manually to only launch the app (it will fall back to SQLite), or configure an external Postgres database via env :)
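In case it helps, here’s a rough sketch of what a trimmed compose file could look like — the service name, image, ports, and env vars are all illustrative, not the project’s actual ones:

```yaml
# Hypothetical trimmed docker-compose.yaml: the postgres service is removed,
# so the app falls back to SQLite. All names here are illustrative.
services:
  app:
    image: your/app:latest   # placeholder image name
    ports:
      - "8080:8080"
    volumes:
      - ./data:/data         # SQLite file would persist here
    # To use an external Postgres instead of SQLite, set something like:
    # environment:
    #   DATABASE_URL: postgres://user:pass@host:5432/db
```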


Thank you, I am not aware of the risk with GitHub, can you tell me more?


Thanks 🙌, appreciate it !


This is honestly one of the kindest messages I’ve received. THANKS ❤.
I’m just tired of seeing every project full of soulless AI slop for fame.
I try to build things with intention, even if it’s not the “trend”, I prefer to stay aligned with what suits me :)


Yo, 0.3.3 is out.
You can now add elements on tablet by long-pressing on an empty space, it opens the context menu. Demo’s already updated if you want to test it there.


Yeah, that shouldn’t be hard to add.
If you don’t mind, open an issue just so I can track it properly. I’m already working on touch support anyway.
The plan is basically a long press on the background to open the menu and create blocks. That should feel natural on tablet without adding extra buttons everywhere.


You’re not missing anything 🙂
Ideon isn’t fully tablet-compatible yet, and mobile portrait is even more limited at the moment.
Touch support is something I’ve been actively working on for a while. Some interactions, like right-click equivalents, need proper tactile handling, and that requires rethinking parts of the UX rather than just patching it.
A dedicated mobile mode for smaller screens is also planned. The challenge is making keyboard/mouse and touch experiences coexist cleanly without breaking workflows on either side, so it’s taking time to do it properly.
Sorry for the frustration on your side. I completely understand it. It’s definitely on my roadmap, even if it will require a bit more work before it feels right…


Thanks so much! Really happy to hear that, it means a lot ❤️. I’m obviously looking forward to adding more block types and integrations, and ideas like Nextcloud or custom blocks are definitely on the roadmap.


Thanks! I hope it ends up being really useful for organizing your ideas and projects :D


Thank you so much! That really means a lot :))
Thanks a lot. Yes, it’s not the most adequate solution yet, that’s exactly why I’m reaching out to communities and forums, to get feedback and improve it every day so it can eventually be useful to more people.
Also, now that I’ve re-read this (I didn’t understand what the downvotes meant at first): why does a new project that doesn’t compete with big companies deserve downvotes? I’m just trying to meet tech people and talk about it, that’s all. It doesn’t need money, it doesn’t hurt anyone, and I’m not posting bullshit.
If it doesn’t solve a problem for you yet, that’s fine; it will get better over time. I genuinely want to understand what made you comment like this. And since you’re a moderator (respect, btw), why push people toward hating on it? What’s the goal here, should I delete the repo?
Nobody here asked for technical details, so I didn’t respond with technical stuff. But now that you ask, I can respond:
The rebuild occurs periodically; you set the period (in seconds) in the .env. A container named orchestrator stops and rebuilds the vault containers, deleting every file that is not in the database and therefore not encrypted (like payloads). For event-based triggers, I haven’t implemented specific ones yet, but I plan to.
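Just to illustrate the idea (this is not the actual implementation — the env var name and both function names are made up): the cleanup boils down to diffing the files on disk against the set the database knows about, on a timer read from the .env:

```python
import os
import time
from pathlib import Path

def purge_unknown_files(root: Path, known: set) -> list:
    """Delete every file under root that the database doesn't know about.

    `known` is the set of filenames the database tracks; anything else
    (e.g. a dropped payload) gets removed. Returns the deleted names.
    """
    removed = []
    for path in root.rglob("*"):
        if path.is_file() and path.name not in known:
            path.unlink()
            removed.append(path.name)
    return removed

def orchestrator_loop(root: Path, load_known_files) -> None:
    # REBUILD_INTERVAL is a hypothetical env var name; the real one
    # lives in the project's .env.
    interval = int(os.environ.get("REBUILD_INTERVAL", "3600"))  # seconds
    while True:
        purge_unknown_files(root, load_known_files())
        time.sleep(interval)
```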
Session tokens are stored encrypted in the database, so when a vault container is rebuilt, sessions remain intact thanks to Postgres.
Same as with the sessions: auth tokens are stored in the database and are never lost, even when the whole stack is rebuilt.
Yes, but not everything. Since one container (the orchestrator) needs access to the host’s Docker socket, I don’t mount the socket directly. Instead, I use a separate container with an allowlist to prevent the orchestrator from shutting down services like Postgres. That container is authenticated with a token, which I rotate: it’s derived from a secret_key stored in the .env and regenerated each time using Argon2id with random parameters. I also use Docker networks to isolate containers that don’t need to communicate with each other, like the vault containers and the “docker socket guardian” container.
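The network isolation part could look roughly like this in compose — the service names, images, and network names below are all made up for illustration:

```yaml
# Illustrative only: two networks so vault containers have no route to the
# socket-guard container, and nothing mounts the socket read-write directly.
services:
  socket-guard:
    image: some/socket-proxy:latest    # hypothetical allowlisting proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks: [control]
  orchestrator:
    image: your/orchestrator:latest    # hypothetical image
    environment:
      GUARD_TOKEN: ${GUARD_TOKEN}      # rotated, Argon2id-derived token
    networks: [control]                # can only talk to socket-guard
  vault:
    image: your/vault:latest           # hypothetical image
    networks: [apps]                   # no path to the socket at all
networks:
  control:
  apps:
```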
every item has its own blob: one blob per file. for folders, I use a hierarchical tree in the database. each file has a parent id pointing to its folder, and files at the root have no parent id.
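A minimal sketch of that parent-id tree — using SQLite here purely so the snippet is self-contained (the project uses Postgres), with table and column names that are assumptions, not the real schema:

```python
import sqlite3

# Illustrative schema: "items", "is_folder", "parent_id" are assumed names.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE items (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        is_folder INTEGER NOT NULL DEFAULT 0,
        parent_id INTEGER REFERENCES items(id)  -- NULL means root
    )
""")
conn.execute("INSERT INTO items (id, name, is_folder) VALUES (1, 'docs', 1)")
conn.execute("INSERT INTO items (name, parent_id) VALUES ('notes.txt', 1)")
conn.execute("INSERT INTO items (name) VALUES ('root-file.bin')")  # root item

def children_of(folder_id):
    # "IS" instead of "=" so passing None matches NULL (root) rows too.
    rows = conn.execute(
        "SELECT name FROM items WHERE parent_id IS ?", (folder_id,)
    ).fetchall()
    return [name for (name,) in rows]
```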
Can the app tune storage requirements depending on the S3 configuration? Not yet; S3 integration is a new feature, but I’ve added your idea to my personal roadmap. Thanks.
and I understand perfectly why you’re asking this. No hate at all, I like feedback like this because it helps me improve.
Hey there! That’s a great question.
So, when you’re just using something by yourself on your own computer, E2EE doesn’t always make a huge difference. You really start to see its value when you bring in outside storage, like S3, or when you have a bunch of people using it.
Think about a company running its own app. If someone uploads sensitive files and doesn’t want the system administrator or the tech team to read them, E2EE comes to the rescue. The files get scrambled before they even leave the user’s device. So, even if the server is in-house, the admin only sees encrypted stuff.
It’s basically about separating who operates the infrastructure from who can actually read the data, which lets people use shared or external storage while knowing their stuff stays private.
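To make the “scrambled before it leaves the device” part concrete, here’s a deliberately toy sketch: a hash-based stream cipher that is NOT secure and has nothing to do with the app’s actual crypto — real E2EE uses vetted primitives like AES-GCM or XChaCha20-Poly1305. It only shows that the server ends up storing ciphertext it cannot read:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only, NOT secure.
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def toy_xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Client side: encrypt before upload. The server (or its admin) only ever
# sees the ciphertext and can't recover the plaintext without the key.
```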
Here’s a simple way to look at it: it’s all about persistence. If someone sneaks a backdoor onto a server or inside a container, that backdoor usually needs the environment to stay put.
But with containers that are always changing, that persistence gets cut off. We log the bad stuff, the old container gets shut down, and a brand new one pops up. Your service keeps running smoothly for folks, but whatever the attacker put there vanishes with the old container.
It’s not about saying hacks won’t ever happen, just about making it way tougher for them to stick around for long :)
Nah, not really. I mostly use AI for the annoying stuff like GitHub workflows, install scripts, and boilerplate code, not the actual backend or frontend code.
Oh, and since I’m French, I also use it to clean up my notes into good English for the README (in response to Jokulhlaups). It’s just a handy tool to speed things up, not some magic button that builds everything with one command. If you look at the commit history, you can see the project grew over time. Definitely didn’t just pop out of a single prompt, haha.
There is already a non-piped docker-compose setup. The installer just downloads the compose file and env.example, and you can also get them manually from GitHub.
You don’t need to set APP_PORT. If it’s unset, the app falls back to the PORT var provided by Portainer. Just make sure APP_URL exactly matches the root path you’re using behind Nginx.
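The fallback logic amounts to something like this — a sketch, where the function name and the final 8080 default are my assumptions, not the app’s actual code:

```python
import os

def resolve_port(env=None) -> int:
    # APP_PORT wins if set; otherwise fall back to PORT (e.g. the variable
    # Portainer provides). The trailing 8080 default is hypothetical.
    env = os.environ if env is None else env
    return int(env.get("APP_PORT") or env.get("PORT") or "8080")
```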
I know from a friend that his deployment runs fine on Portainer, so it should work with a standard setup.