MithrilSecurity has just launched BlindChat, an open-source, privacy-first alternative to ChatGPT. BlindChat aims to be the first conversational AI that runs entirely within the web browser, with no third-party access to user data. Today's mainstream AI assistants typically require users to share their data with the AI service provider in exchange for access to the model, exposing that data to misuse or theft. And because user data is a valuable resource for improving LLMs, some providers implicitly reuse it to further train their models, with the risk that the LLM memorizes users' private information.
By performing local inference or employing secure, isolated environments called secure enclaves, BlindChat ensures that users’ data is kept private at all times and that they retain complete control over it.
BlindChat has two main audiences in mind:
- Consumers: Offer a more private alternative to existing services. Most consumers today surrender their data to AI services whose privacy controls are often unclear or nonexistent.
- Developers: Make the platform simple to configure and deploy, so that developers can more readily ship privacy-by-design conversational AI.
MithrilSecurity adapted the application so that the browser takes over work normally performed by the server. This removes the AI service provider from the trust model, and privacy is protected as a result.
Transparent and secure AI is achieved by moving functionality from the server to the user's browser. This protects end users' personal information and gives them agency over their data. For instance, Transformers.js allows inference to be performed locally in JavaScript, with the added convenience of chats being saved in the user's browser. As a result, the AI service's administrators cannot see any of the user's information, hence the name "BlindChat."
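To make the on-device idea concrete, here is a minimal sketch of what in-browser inference with Transformers.js can look like. The model identifier is an assumption (a publicly hosted LaMini-Flan-T5 checkpoint on the Hugging Face Hub), and the snippet is illustrative rather than BlindChat's actual code.

```typescript
// Minimal in-browser inference sketch using Transformers.js (@xenova/transformers).
// Assumption: the "Xenova/LaMini-Flan-T5-783M" checkpoint; not code from the BlindChat repository.
import { pipeline } from "@xenova/transformers";

// The model weights are downloaded into the browser (and cached), then run locally:
// neither the prompt nor the response ever leaves the user's machine.
const generator = await pipeline("text2text-generation", "Xenova/LaMini-Flan-T5-783M");

const output = await generator("Explain why in-browser inference protects privacy.", {
  max_new_tokens: 80,
});

console.log(output); // => [{ generated_text: "..." }]
```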
Data is only transmitted to a server when remote enclave mode is activated. In that setting, the server is deployed inside a verified, isolated environment known as a secure enclave, which provides full perimeter defense and blocks access from the outside world. Nobody can access user information, not even the AI provider's administrators.
MithrilSecurity has two different privacy options available to users:
- On-device: The model is downloaded into the user's browser and inference runs locally. Because of bandwidth and processing-power limitations, this mode is best suited to smaller models.
- Zero-trust AI APIs: Data is sent to a secure enclave that hosts the model, so inference happens remotely. This mode offers comprehensive guarantees through strong isolation and verification, and no AI service provider ever has unencrypted access to users' data; a sketch of the verification flow follows this list.
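The key idea behind the zero-trust path is that the client verifies the enclave's attestation (proof of exactly which code is running inside it) before sending anything. The sketch below is purely illustrative: the endpoint, routes, and field names are hypothetical assumptions, not BlindChat's or MithrilSecurity's actual API.

```typescript
// Illustrative zero-trust flow: verify the enclave before sending any data.
// All names here (ENCLAVE_URL, /attestation, /generate, measurement) are
// hypothetical placeholders, not the real BlindChat API.
const ENCLAVE_URL = "https://enclave.example.com";
const EXPECTED_MEASUREMENT = "audited-code-hash-goes-here"; // published measurement of the audited enclave code

async function queryEnclave(prompt: string): Promise<string> {
  // 1. Fetch the enclave's attestation report.
  const report = await fetch(`${ENCLAVE_URL}/attestation`).then((r) => r.json());

  // 2. Refuse to talk to the enclave unless it runs exactly the audited code.
  if (report.measurement !== EXPECTED_MEASUREMENT) {
    throw new Error("Attestation failed: enclave is not running the expected code");
  }

  // 3. Only now send the prompt; it is processed inside the isolated enclave,
  //    out of reach of the provider's administrators.
  const response = await fetch(`${ENCLAVE_URL}/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { text } = await response.json();
  return text;
}
```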
The project consists of three main parts:
- User interface: The front end users interact with. It contains the chat window and will eventually add widgets and plugins for features such as document loading and voice control.
- Private LLM: Developers have complete control over which private LLM processes user requests. The current options are local models or remote enclaves, both providing transparent and confidential inference.
- Storage: Developers can configure where data such as chat history (and, in the future, RAG embeddings) is kept; a browser-storage sketch follows this list.
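As a rough illustration of the storage component, the snippet below keeps chat history in the browser's localStorage only; the storage key and helper names are hypothetical and not taken from the BlindChat codebase.

```typescript
// Browser-only chat storage sketch: the conversation never reaches a server.
// The storage key and helper names are hypothetical.
type ChatMessage = { role: "user" | "assistant"; content: string };

const STORAGE_KEY = "blindchat-history";

// Persist the conversation in the user's browser only.
function saveHistory(messages: ChatMessage[]): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(messages));
}

// Restore it on the next visit, or start fresh if nothing is stored.
function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}
```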
MithrilSecurity currently supports only LaMini-Flan-T5 inference. Once the 370M version is out, they intend to integrate Microsoft's phi-1.5 to boost performance. LlamaIndex-TS integration on the client side is also under development, so RAG can be used locally in the browser to query sensitive documents.
Check out the GitHub and Demo. All credit for this research goes to the researchers on this project.
Dhanshree Shenwai is a computer science engineer with solid experience at FinTech companies covering the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier in today's evolving world.