Hacker Newsnew | past | comments | ask | show | jobs | submit | bit_byte's commentslogin

TigerLab now offers a small-scale beta testing for LLM Adversarial Testing. Assess your LLMs and Chatbots at https://www.tigerlab.ai. Your insights matter!


The demo image only shows "failed" output, what are acceptable outputs for those prompts?


I'd imagine outputs along the lines of 'I cannot comply with that request' or stating ethical issues with continuing onwards in the conversation. This seems to want to catch what most would consider to be publically perceived harmful responses


Check out the AI safety comparison between Llama2, Mistral GPT-4, GPT-3.5, using TigerLab toolkit


Yes, definitely! Check out our latest demo at https://youtu.be/gi8P1i0hm70


Look awesome! Will definitely try it out!


f you have any question about TigerLab, welcome to join our discord channel! https://discord.gg/GnwH2STv


Introducing TigerLab - an Open Source LLM toolkit, providing solutions to a variety of LLM domains (RAG, finetune, search, AIsafety)

More about TigerLab: https://github.com/tigerlab-ai/tiger

You can also find more experiments on https://www.tigerlab.ai


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: