• 3 Posts
  • 22 Comments
Joined 2 years ago
Cake day: June 14th, 2024

  • so now you're saying AI psychosis is a serious thing we don't understand… but you seem convinced that you're qualified to diagnose it?

    unlike many others responding to this thread, you've seen my work and even linked a comprehensive post about how my project works. your pushback here doesn't contain any substance. you call it AI psychosis so you can avoid giving the project actual attention.

    ultimately i don't think my project is interesting to you. if i'm seen as an annoying lemmy user/troll, i encourage you to block me.



  • the only project relevant here is: https://positive-intentions.com/

    the parts i want open source are on github. my project wasn't always open source. i created it without AI agents. then i open sourced it thinking it would gain more trust with users… and it did, but a key observation is that there are folks like yourself who will never be satisfied. if open source code, docs and my communication aren't enough… i have no delusion that identifying myself would benefit the project in any way. it's simply a vector by which people would highlight why i'm not qualified to work on the project.

    criticism in cybersec is common and expected. my ideas should be challenged. but the code is right there. feel free to ignore any details you think might not be up to your quality standard. you linked my previous post, which is more technical about how my app works. you can ask for further clarity on those details… but your criticism on previous posts suggests to me that you don't actually want clarity, because you already have the references to find out more.

    the project is enjoyable for me; it's why i still work on it. would it be wild for me to want to make money from it? i'm trying to be more transparent about my process. the post here highlights my AI usage and how i'm using it to create high-effort work. "high-effort" is hardly quantifiable, but i see many responses along the lines of "AI can't be trusted to do things perfectly"… as if i don't also agree with that. you linked my previous post, which i would hope made it clear that my AI prompt wasn't "create me a messaging app".

    a key and worrying observation is that mentioning i use AI is the only thing that makes a difference in feedback about the project (as per the subject of this post). you can see that my previous post was significantly better received compared to this current one. that is the project where i'm using AI… because duh! it's a game changer.

    the point i'm making in the OP still stands: people can't see past my project after i mention i used AI. human effort has never been easy to quantify… the best you've got is story points, and that's hardly meaningful.


  • hi. thanks for taking a look. sorry for the delay in responding; i wanted the heat on this post to settle down a bit.

    i originally started with everything in src, but when it came to formal verification and proofs, i came to the conclusion that you can't simply point the tooling at a single folder; the various functions are better separated to make them easier to document.

    unlike formal verification with tools like hax, the formal proofs are only loosely related to the code. there isn't a direct relation between the proverif files and the code itself; if i change the code, i also have to adjust the proverif. i documented it on the website to help me keep track of the functionality.

    https://positive-intentions.com/docs/technical/signal-protocol-formal-verification/proverif
    https://www.reddit.com/r/cryptography/comments/1evdby4/comment/liwyn3o/

    regarding how the cryptography is loaded, i'm using module federation. the signal protocol is imported into the cryptography module (so the app doesn't need to load the signal protocol project explicitly). that cryptography module is itself loaded into the p2p-framework repository so that i can automate the handling of p2p authentication.
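    for anyone unfamiliar with module federation, the wiring described above could look roughly like this (a hypothetical webpack 5 sketch; the names `cryptography`, `./signal` and the file paths are assumptions, not the project's actual config):

    ```javascript
    // hypothetical sketch of the cryptography module's webpack config:
    // it exposes a signal-protocol wrapper as a federated module, which the
    // p2p-framework then consumes at runtime instead of bundling it directly
    const { ModuleFederationPlugin } = require('webpack').container;

    module.exports = {
      plugins: [
        new ModuleFederationPlugin({
          name: 'cryptography',
          filename: 'remoteEntry.js',
          exposes: {
            // consumers import 'cryptography/signal' rather than the
            // signal protocol project itself
            './signal': './src/signal-protocol',
          },
        }),
      ],
    };
    ```

    the consuming p2p-framework would then declare the cryptography module under `remotes` in its own config and load it from its deployed `remoteEntry.js`.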

    that AI audit, as critical as it is of my implementation, is the best source of truth for my project. there is simply not going to be a third-party audit, so it is intended to be objective, and i think i signpost enough that it's AI generated. i need to clean up the exclamation marks and emojis, but the information there should all be correct.

    there are indeed a lot of debug messages logged. it's worth repeating that the project is still a work in progress and far from finished. i'm sharing it now because it seems to be in a reasonable state. i understand people can have high expectations around perfection… this is not that kind of project. perfection would be a waste of my time at this stage.

    the CSP headers there are all deliberate, to support things like gifs and simpleanalytics. they could do with a bit of a clean-up, and with taking ownership of things like fonts… it's been on the todo list for a while but i didn't prioritise it. thanks for raising it… i'll see about cleaning it up.
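    the cleaned-up policy might look something like this (a hypothetical sketch; the directive values are examples for gif/analytics support with self-hosted fonts, not the project's actual policy):

    ```javascript
    // hypothetical sketch of a tightened CSP: third parties allowed only
    // where needed, fonts self-hosted (all sources here are examples)
    const csp = [
      "default-src 'self'",
      "img-src 'self' data: blob: https://media.giphy.com",        // gif support (example CDN)
      "script-src 'self' https://scripts.simpleanalyticscdn.com",  // analytics
      "connect-src 'self' wss:",                                   // p2p signalling
      "font-src 'self'",                                           // own the fonts: no third-party font hosts
    ].join('; ');

    console.log(csp);
    ```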

    the hax extraction is doing the abstraction to axioms, and you're right that the axioms aren't proven… this is something i'm actively investigating.

    thanks for your time and attention on the project. sorry if i've misled you to believe the project is more mature than it is… it is, however, a genuine attempt to create something safe and secure.


  • This generally seems to allude to my due diligence, and whether it's low-effort AI.

    It's skepticism that has me putting attention toward docs and various details.

    For example: I tried to get a security audit. I can't get one for free, so I created one with AI. I'd like to be clear that I understand how my app works and was able to articulate it to the best of my ability to the AI to generate the security audit. I was exhausted from the experience of creating the audit with AI, but it provided me with good information and advice. I stand by the feedback there that it isn't ready for production.

    In all my posts on all platforms I'm sure to mention that it isn't production-ready (the same for the repos on GitHub)… but the general aim is to create something secure.




  • wow, that's deep analysis and advice. i generally think i do well.

    i work on my project and cryptography because it's interesting. i worked with cryptography long before AI… but like a "regular" developer on a side project, i'm going to use AI.

    i actively seek advice about the code in my project. i only share my work after i've put in what i think is enough time and effort. it clearly isn't enough that the project "works". in cybersec it's important for code to be audited or reviewed; that fundamentally isn't an option on a project like mine unless i share something that gets described as "AI slop". that feedback is fine. it's important that it's open source.

    it might not be fun for most, but this is something i work on because it's enjoyable to me. it's open source for transparency and criticism. i just want to take "AI", as a criticism, off the table, because i can't quantify my involvement… which is an understandably wild thing to ask, so i try to approach it with caution.

    i work on several projects that interest me. many, but not all, are open source. they exist because i woke up one day and decided i wanted to create something.



  • i started off with a version i created manually without AI. i know how to do this old-school (i tried). that was a different kind of slop.

    https://github.com/positive-intentions/chat

    i use AI in a way i think is appropriate. i check as much as i can myself too. i post online about details and questions. i can iterate with AI. i may be naive to think i know how to inspect what is created, so i share it online. i'm not sharing slop; this is the best i can do. of course there are countless points of improvement, but there are only so many hours in the day.

    you're sharing a valid opinion, but it's difficult for me to quantify my efforts. i'm sure you don't think i just asked AI something basic (e.g. "verify this code is correct").




  • Most I want is transparency.

    i agree with all you're saying, especially this, which is why i entertain the idea of open source at all. what does transparency look like to you? code? documentation? open discussion? transparency is undermined when i'm trying to talk about something clearly complicated in order to seek feedback.

    cryptography code… Isn’t that a bit dangerous?

    in software dev we have things like unit tests (you already know that)… but when diving into cryptography we also have formal proofs and verification we can use. it doesn't need AI to extract an abstraction from the code implementation to run verification on. the tooling there is common practice, and if we question whether AI is doing it properly, we bring into question whether the tooling used is good enough.

    • security audit
    • unit tests
    • formal proof
    • formal verification
    • documentation

    individually, they could all easily be AI slop. but combined, i hope they can serve as a starting point for a proper review. i don't mean a proper review from you, either… i was seeking a review from orgs that specialise in such reviews.

    https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open

    you make a lot of assumptions about how i code and what i understand about my project. enumerating what i've done and plan to do wouldn't do it any justice… but i will say this project is the result of a long-term effort. i created the project without AI originally. the idea is unique around client-managed cryptography (https://github.com/positive-intentions/chat)… ultimately it became clear that open source is dead, so i've started introducing less transparency into the project as i introduce a closed-source UI. i still keep the cryptography-related modules open for transparency (whatever that's worth when people see that AI was involved).

    i wouldn't put my project out there if i didn't have faith in the implementation. i have actively sought feedback and received good advice, from which i iterated and improved. it's particularly concerning if i'm being banned from communities for posting slop.


  • i vibecode a lot of things. my project is not inherently dangerous. people can use any software irresponsibly. in my project, and in all my communications about it, i make it clear to users to use it cautiously and that it's presented for testing and demo purposes. it's mentioned in all of my posts, and i also have terms and conditions within my projects that explain as much.

    nobody is being tricked into sharing sensitive information… in fact i made a proactive attempt to create something that doesn't need any personal information.

    don't tell me what i should and shouldn't be coding. i put time and effort into testing and verifying. this is the issue with mentioning AI: it undermines all other efforts. it's the low-hanging fruit of criticism.




  • the recent post that got me banned was a copy of this post here:

    https://www.reddit.com/r/cybersecurityai/comments/1sxvrmu/browserbased_file_encryption_no_install_or/

    i make a point in all my posts to be clear about the caveats. i'm not promoting this to replace anything. details to find out more are there, along with advice not to use it for sensitive data.

    for my messaging app, the caveats are similarly mentioned: https://positive-intentions.com/docs/technical/p2p-messaging-technical-breakdown

    my projects are research and development projects, which i make sure to make clear when i post about them. i'm fairly consistent with advice around cautious use… knowing full well that it will deter people. i'm proactively seeking criticism in order to improve them.

    It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what’s changing.

    bingo!.. you're framing it as a negative, understandably, but unless i'm mistaken, that's the way it's going to have to go. software development, broadly speaking (for better or worse), is going to be AI generated. the tooling and methodologies have to keep up.

    horrible impacts it has on our world

    that's pretty vague; i'm sure it does some good too. AI is a tool. it's easy to talk about how AI is impacting people badly. personally, i've been unemployed for the past few months. it's a horrible experience to go through countless interviews thinking i aced them, but still come up with a rejection because the field has become so competitive. but i don't blame AI for that. it's a tool that i need to learn how to use. perhaps others use it better than me.



  • completely understandable, hence the proactive attempt to get a professional security audit, so i can avoid asking people to "trust me".

    it's completely understandable that you want to use something established. i can't offer more than open source and transparency in the implementation. if "trust" is behind the "paywall" of a security audit, it's simply not an option without support.

    i used AI to generate an audit. it took several days of my time and effort to get it to where it is. i made a genuine attempt to be objective.

    in SWE we already have things in place for this, like unit tests. if we dive further into cryptography, we have things like formal proofs and verification.
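    as a concrete example of the first kind of check, a round-trip unit test for symmetric encryption might look like this (a minimal sketch using the standard Web Crypto API, not the project's actual test suite; runs in modern browsers or Node 19+):

    ```javascript
    // minimal sketch: decrypt(encrypt(m)) === m for AES-GCM via the Web Crypto API.
    // illustrative only, not the project's code.
    const subtle = globalThis.crypto.subtle;

    async function roundTrip(plaintext) {
      const key = await subtle.generateKey(
        { name: 'AES-GCM', length: 256 },
        false,                       // non-extractable key
        ['encrypt', 'decrypt'],
      );
      const iv = globalThis.crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const encoded = new TextEncoder().encode(plaintext);
      const ciphertext = await subtle.encrypt({ name: 'AES-GCM', iv }, key, encoded);
      const decrypted = await subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);
      return new TextDecoder().decode(decrypted);
    }

    roundTrip('hello').then((out) => {
      console.assert(out === 'hello', 'round-trip must preserve the message');
    });
    ```

    a test like this only shows the primitive is wired up correctly; it says nothing about protocol-level properties, which is where the formal tooling below comes in.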

    formal verification has tooling to help make sure things work and behave how they should. (without AI) it can take a look at the code and create abstractions that can be used for verification. if we question whether AI can be used with such tooling, we start discussing whether the tooling we use is good enough (it's pretty widely used!).

    if the conversation can't move past the fact that i used AI, then we're not really having a discussion.