Interesting. The novelty here seems to be in the method of interacting with the LLM vs. innovating at the level of the AI workflow. My guess is each email client is essentially just given a custom prompt? Like others are saying, perhaps this can be popular with legacy business, where people don't want to go to a chat window?
This medium is exactly how you get people in fields/industries with slow tech adoption to use AI. Most jobs in legacy businesses live on email -- these people aren't going to want to learn a sleek/new AI chat tool. They're going to stay on their email client.
Agreed. At the very least, it solves the context switching problem between email and chat AI.
Email as an asynchronous interface is an interesting take on chat AI.
Giving out any email address on the main company name sounds risky. Somebody could register hostname@ or ceo@ ?
The FAQ should explain who the conversation is shared with. The generic "data is stored [encrypted]" only covers storage. Is an uploaded PDF sent to a third party, possibly to train their next version?
I always hated the streaming text responses in chat interfaces; it always seemed like a cynical attention hack to continuously update the content instead of just returning the answer once it was done. So I think email makes a lot of sense. It especially makes slow-running local models a bit more bearable: instead of watching it type at 20 words per minute, just get back to me when you reach a stop token. Plus it can attach any relevant files, or files it generates. I wrote a chatbot 8 years ago (ChatScript dialog trees) that would gather parameters and run a SQL script, returning the resulting table as a .csv you could download. I wish I'd thought of doing it as an email persona; it would have integrated with existing processes way more easily than "log in to this service we just spun up whenever you want to use it..."
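The gather-parameters, run-query, mail-back-a-CSV flow the commenter describes could be sketched like this with the stdlib (the database path, query, and addresses are placeholders for illustration):

```python
import csv
import io
import sqlite3
from email.message import EmailMessage

def query_to_email_reply(db_path: str, sql: str, to_addr: str) -> EmailMessage:
    """Run a query and build an email reply with the result table as a .csv attachment."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(sql)
        header = [col[0] for col in cur.description]  # column names from the cursor
        rows = cur.fetchall()
    finally:
        conn.close()

    # Serialize the result set to CSV in memory.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)

    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = "Your query results"
    msg.set_content(f"Attached: {len(rows)} rows.")
    msg.add_attachment(buf.getvalue().encode(), maintype="text",
                       subtype="csv", filename="results.csv")
    return msg
```

The actual SMTP send is left out; the point is that the "response" is just a well-formed message with an attachment, which slots into whatever mail pipeline already exists.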
Agreed, definitely some pros to using LLMs in email vs chat. Feels pretty natural to send an email and get a response shortly after with a notification, etc.
Fair points. We have the ability to restrict any address, so shouldn't be much of a concern on our end to shut down ceo@, etc.
We don't use any data for our own training. We send it to Gemini to process the prompt and encrypt it at rest. Encrypting at rest lets us use it as context in your email thread as more replies come in.
File attachments are auto-deleted after 30 days.
in this website's FAQ:
>is my data secure ?
>Yes. We take data security seriously. Email content is encrypted using AES-GCM encryption.
/after we forward it to Google Gemini lol, but encrypted-at-rest is better than nothing
We encrypt data at rest, but obviously we need to send it to a LLM for processing. We make this clear in our privacy policy fwiw.
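For reference, an encrypt-at-rest step with AES-GCM might look like this sketch. It assumes the third-party `cryptography` package; key management is elided, and binding the thread id as associated data is my own assumption, not something the vendor has described:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_at_rest(key: bytes, plaintext: bytes, thread_id: bytes) -> bytes:
    """AES-GCM with a fresh 12-byte nonce per message; thread id bound as associated data."""
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext, thread_id)
    return nonce + ct  # store the nonce alongside the ciphertext

def decrypt_at_rest(key: bytes, blob: bytes, thread_id: bytes) -> bytes:
    """Split nonce and ciphertext back apart; raises if the tag or AAD doesn't verify."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, thread_id)
```

Using the thread id as associated data means a ciphertext copied into a different thread's storage fails authentication on decrypt.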
For a company, will company policy allow sending internal emails to your service?
Yes, should be fine on our end. Probably not a good idea to include us on anything that requires NDAs, etc., since that data will be fed into Gemini.
EDIT: I can't speak for your company. You should check company policies before including Subjam in threads.
god damn you for thinking of this idea before me
I spend a ridiculous amount of time at my job on emails.
I use Claude and ChatGPT all the time, but I just don't see how language models can help me with email.
Typing the words is never the bottleneck.
The problem is expressing what needs to be expressed with enough clarity to avoid misunderstandings, and then yet more email.
Summarizing an old, long thread could have some value but that is such an edge case.
I would be the main customer for this type of product but I just don't see much value in it at all.
I was surprised to see no one built it yet
Can I cc the AI?
That would be a nice way to tag it as context the bot should be aware of without eliciting a response from it: have it respond only when it's a direct recipient.
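That To-versus-Cc behavior is straightforward to express with the stdlib email parser (the bot address here is a made-up placeholder):

```python
from email.message import EmailMessage
from email.utils import getaddresses

BOT_ADDR = "assistant@example.com"  # hypothetical bot address

def should_respond(msg: EmailMessage) -> bool:
    """Reply only when the bot is a direct (To:) recipient; treat Cc as read-only context."""
    to_addrs = [addr.lower() for _, addr in getaddresses(msg.get_all("To", []))]
    return BOT_ADDR in to_addrs
```

A Cc'd bot would still ingest the message into the thread's context; it just wouldn't generate a reply.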
Would be nice to have a natural-language layer to tell it what I want done with each email. I spent way too many hours in Microsoft Power Automate just to say "when an email is from X and the subject line starts with RFQ\W+, download the attachment, save it under the matched name, and add a row to a spreadsheet grabbing the tabular data from the email body." (Pretty handy that I could insert data into spreadsheets without having to worry about auth; benefits of working within an ecosystem.)
Yes you can pretty much do anything. CC, FWD, include humans or many AIs on the email too.
If Gmail implements this feature, wouldn't your startup be at a serious disadvantage?
Yeah, probably. Or they just buy our startup if it has traction already.