Chat AI FAQ
Data Privacy
Are my conversations or usage data used for AI training or similar purposes?
No, whether you use internal or external models, your conversations and data are not used to train any AI models.
When using internal models, are my messages and conversations stored on your servers at any stage?
No, user messages and AI responses are not stored on our servers at any stage. Once your message is sent and you receive the response, the conversation is only available in your browser.
What data does Chat AI keep when I access the service?
We do not keep any conversations or messages on our servers. We only record some usage statistics in order to monitor the load on our service and improve the user experience. This includes usernames, timestamps, and the models/services that were requested. Everything else the chatbot remembers (such as the history of your last conversation) is only stored locally in your browser.
When using external models, are my messages and conversations stored on Microsoft’s servers at any stage?
While we do not keep any conversations or messages on our servers, Microsoft retains the right to store messages/conversations for up to 30 days in order to prevent abuse. Since the request is sent directly from GWDG’s servers, no user information is included in the requests to Microsoft. For more information, see: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy
Availability
My institution is interested in using Chat AI. Can we advertise it to our users? Would you be able to handle the additional load of XXX users?
For large institutions, please contact us directly at info@kisski.de.
Are all the models available in Chat AI free of charge?
All models accessible to a user with an AcademicCloud account are free of charge, with the exception of the OpenAI GPT… (external) models. These are only freely available to public universities and research institutes in Lower Saxony and to the Max Planck Society.
Usage
Why is model xxx taking longer than usual to respond?
There can be multiple reasons for this.
- Most likely, your conversation history has grown over time and you haven’t cleared it. Note that each time you send a message, the entire conversation history has to be processed by the model, so a longer conversation takes longer to process and also uses more input tokens (see the sketch after this list).
- If the model responds slowly even when the conversation is empty, it could be due to high load, especially during peak hours, or an issue with the hardware running the model on our infrastructure. You can wait a little or switch to a different model to see if the response time improves. Feel free to reach out to support if the problem persists.
- Check your internet connection. A slow or high-latency connection may be the cause, especially if you notice no difference when switching models.
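For readers who access Chat AI programmatically, the following minimal sketch illustrates the first point: chat-style APIs are stateless, so the client resends the entire message history with every request, and the model re-reads all of it each time. This assumes an OpenAI-compatible client; the endpoint URL, API key, and model name below are placeholders, not Chat AI’s actual configuration.

```python
# Minimal sketch (not Chat AI's actual implementation): a chat client that,
# like most OpenAI-compatible APIs, resends the FULL message history with
# every request. The model re-reads the whole conversation each time, so
# input-token usage and processing time grow as the chat gets longer.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://chat.example.org/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                  # placeholder credential
)

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(user_message: str, model: str = "example-internal-model") -> str:
    """Append the user message, send the *entire* history, store the reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    # Every turn grows `history`, so the next call sends (and pays for) more input tokens.
    return reply

def clear_history() -> None:
    """Keeping only the system prompt is the equivalent of clearing the conversation."""
    del history[1:]
```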
Can Chat AI process my images?
Yes, as long as the model supports it. Simply select a model that supports image input, as indicated by the camera icon, then attach an image using the picture button in the prompt textbox. Note that some models may not support attaching more than one image at a time.
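As a hedged illustration of what happens behind the picture button, this is how an image is typically passed to a vision-capable model through an OpenAI-compatible chat API: encoded as a base64 data URL inside the message content. The endpoint, API key, and model name are placeholders, not Chat AI’s actual configuration.

```python
# Illustrative sketch only: sending an image to a vision-capable model via an
# OpenAI-compatible chat API (the web interface does this for you when you
# use the picture button). Endpoint, key, and model name are placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="https://chat.example.org/v1", api_key="YOUR_API_KEY")

with open("diagram.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="example-vision-model",  # pick a model that supports image input
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this diagram show?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```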
Can Chat AI process my PDF files?
Yes! Simply use the “attach text” button in the prompt textbox and select your PDF file. You will see the file in your attachments list along with a “process” button. Note that PDF files must be processed before you can send a message to the model. Depending on the size and contents of your PDF file, this may take a while, or even fail if the file is too large. Once the file is processed, you can send your message and the file’s contents will be attached to it.
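As a rough, purely illustrative sketch of what “processing” a PDF amounts to (extracting its text so it can be attached to the prompt), consider the following. This is not Chat AI’s actual pipeline; the third-party pypdf package and the file name are assumptions made for the example, and it only shows why large or image-heavy PDFs take longer to process.

```python
# Conceptual sketch: extract the text of a PDF so it can accompany a prompt.
# NOT Chat AI's actual processing pipeline; illustration only.
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("paper.pdf")
extracted = "\n".join(page.extract_text() or "" for page in reader.pages)

# The extracted text is then combined with the user's message, so a very
# large PDF can take long to process or exceed the model's context window.
prompt = f"Please summarise the following document:\n\n{extracted}"
print(f"{len(reader.pages)} pages, ~{len(extracted)} characters of text attached")
```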
OpenAI models
Can I use my own system prompts with the OpenAI (external) models?
No, sorry. The system prompt used by the OpenAI models can’t be changed by end users. Please use our internal models if you need to set custom system prompts.
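Below is a hedged sketch of setting a custom system prompt with an internal model through an OpenAI-compatible client. The endpoint, API key, and model name are placeholders; this only illustrates the general mechanism and is not documentation of Chat AI’s actual interface.

```python
# Sketch, not official documentation: with the internal models a custom
# system prompt can be supplied as the first message of the conversation.
# Endpoint, key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://chat.example.org/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="example-internal-model",
    messages=[
        {"role": "system", "content": "You are a terse assistant that answers in bullet points."},
        {"role": "user", "content": "Summarise the data privacy section of this FAQ."},
    ],
)
print(response.choices[0].message.content)
```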
Why are o1 and o1-mini slower / why can’t I get responses from o1 and o1-mini?
The o1 and o1-mini models perform internal reasoning, which means they need much more time to process a request. Furthermore, Microsoft’s API does not yet support streaming for these models, so Chat AI has to wait until the model has generated the entire response before any data is received. In some cases, especially when there is a long conversation history, this can take so long that the connection times out and the request fails with a “Service Unavailable” error.
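The difference is easier to see in code. Below is a conceptual sketch contrasting a streaming request (tokens arrive as they are generated, so the connection stays active) with a non-streaming request (nothing arrives until the whole answer is ready, which is where long reasoning times can exceed a gateway timeout). The endpoint, API key, and model names are placeholders, not Chat AI’s internal code.

```python
# Conceptual sketch of streaming vs. non-streaming requests.
from openai import OpenAI

client = OpenAI(base_url="https://chat.example.org/v1", api_key="YOUR_API_KEY")
messages = [{"role": "user", "content": "Explain quicksort."}]

# Streaming: tokens are delivered as they are generated, so data starts
# arriving within seconds and the connection is kept busy.
for chunk in client.chat.completions.create(
    model="example-streaming-model", messages=messages, stream=True
):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)

# Non-streaming: nothing is received until the *entire* answer is generated.
# If generation takes longer than an intermediate timeout, the request fails
# with an error such as "Service Unavailable" even though the model is still working.
response = client.chat.completions.create(
    model="example-reasoning-model", messages=messages, stream=False
)
print(response.choices[0].message.content)
```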
Are the OpenAI GPT… (external) models the real ChatGPT/GPT-4/… by OpenAI?
They are. We have signed a contract with Microsoft that gives us access to the models running in their Azure cloud. Since the service costs money, it is only available to users from Lower Saxony or the Max Planck Society. Thank you for your understanding.
Why does GPT-4 refer to itself as GPT-3 when I ask what model it is?
This is a known issue when using GPT-4 via its API (see https://community.openai.com/t/gpt-4-through-api-says-its-gpt-3/286881). Nevertheless, the model is in fact GPT-4, even if it states otherwise.