The original research paper on IBM’s AI Risk Atlas defines a taxonomy of 40 AI risks. IBM has since expanded the online Atlas, and the current IBM documentation lists 100 named risks. So where does the figure of 60 for IBM's list of AI risks come from?


This is interesting. I was thinking about building a similar solution around GRC, but this time focusing on AI regulations, AI threats, breaches, 0-days, etc. Out of curiosity, did you use agents for this or a platform like Exa AI?


Hey, I don't rely on any agents; my approach primarily involves heuristic-based detection and fuzzing using various open data sources.


Maskwise and LLM Guard serve different stages of the AI pipeline. LLM Guard filters prompts and responses at inference time to prevent prompt injection attacks. Maskwise is for preparing datasets before LLM training/fine-tuning: it processes large document collections (PDFs, Office docs, images) to detect and anonymize PII.
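If it helps to picture the dataset-prep side, here is a minimal sketch of that kind of regex-based PII redaction (illustrative only; not Maskwise's actual API or detection logic, which is more sophisticated):

    // Hypothetical regex-based PII redaction -- just shows the idea.
    const PII_PATTERNS: Record<string, RegExp> = {
      EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
      SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
      PHONE: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g,
    };

    // Replace each detected entity with a typed placeholder so the
    // anonymized text remains usable for training/fine-tuning.
    function anonymize(text: string): string {
      let out = text;
      for (const [label, pattern] of Object.entries(PII_PATTERNS)) {
        out = out.replace(pattern, `[${label}]`);
      }
      return out;
    }

    console.log(anonymize("Contact jane@example.com or 555-867-5309."));
    // -> "Contact [EMAIL] or [PHONE]."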

Vault is in the works :)


There is a campaign where, when you subscribe to Lenny’s Newsletter, you get one free year of 10 incredible products, including Bolt, Cursor, Lovable, Replit, v0, Linear, Notion, Perplexity Pro, Superhuman, and Granola.

This requires an annual subscription to the newsletter ($200), but it looks like it's worth it if you are a heavy user of those tools.

You must be a new paying customer of the products to take advantage of the free year.


For a while, we’ve been developing a DePIN-powered uptime monitoring tool designed to handle data from potentially millions of devices. Our current infrastructure monitoring and uptime management service, Checkmate, is evolving to include DePIN integration, which will allow users to burn tokens to access data from the UpRock DePIN network.

This is currently how it works under the hood:

- Connect your wallet -> Select the server you want to monitor -> Choose a geographic focus (specific cities, countries, or entire continents) for Checkmate to send ping messages.
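In code terms, that flow might look something like the sketch below (names and endpoint are hypothetical; this is not Checkmate's actual API):

    // Hypothetical sketch of the wallet -> server -> region flow above.
    interface CheckConfig {
      serverUrl: string;        // the server being monitored
      regions: string[];        // geographic focus for the DePIN probes
      intervalSeconds?: number; // omit for a one-off ping
    }

    async function scheduleCheck(wallet: string, config: CheckConfig) {
      // Tokens are burned per request; the network fans the ping out
      // to probes in the selected regions.
      const res = await fetch("https://api.example.com/v1/checks", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ wallet, ...config }),
      });
      return res.json(); // per-probe latencies, keyed by node location
    }

    // One-off ping from European probes:
    scheduleCheck("0xWALLET", { serverUrl: "https://my.server", regions: ["EU"] });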

While managing large volumes of data isn’t an issue at this stage, visualization remains a challenge. We’ve implemented MapLibre to display the data, giving users the flexibility to send one-off ping requests to the DePIN network or schedule continuous checks (e.g., every minute).
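For instance, plotting probe nodes with MapLibre GL JS looks roughly like this (the node data here is made up for illustration):

    import maplibregl from "maplibre-gl";

    // Plot DePIN probe nodes as markers; real data comes from the network.
    const map = new maplibregl.Map({
      container: "map",                                   // id of a <div> on the page
      style: "https://demotiles.maplibre.org/style.json", // public demo style
      center: [0, 20],
      zoom: 1.5,
    });

    const nodes = [
      { lng: -74.0, lat: 40.7, label: "us-east" },
      { lng: 13.4, lat: 52.5, label: "eu-central" },
    ];

    map.on("load", () => {
      for (const node of nodes) {
        new maplibregl.Marker()
          .setLngLat([node.lng, node.lat])
          .setPopup(new maplibregl.Popup().setText(node.label))
          .addTo(map);
      }
    });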

Given the novelty of this concept (similar to RIPE Atlas), visualizations will play a critical role for admins. Here's what we can currently offer on the dashboard:

- Node distribution on a map: Visualize the number of nodes per country.

- Selective probing: Choose probes directly on the map.

- Probe details: View all probes selected for a specific server.

- One-off ping tests: Perform immediate connectivity checks.

I need feedback on how to move ahead. We’re just a few weeks away from the general release, so it would be great to get some thoughts on whether this is the right balance of features or whether adjustments are needed.

My immediate questions would be:

- If you had access to a global DePIN network for server monitoring, what would you prioritize seeing on the dashboard?

- Would you be interested in seeing historical logs, e.g., access logs going back to a specific point in time?

- Would you want to customize the packet size (i.e., set the size of the packets being sent)?

There will probably be more, but I would like to start with a small UI set initially.


How would you solve the problem of collecting data from servers?


Would love to learn more about this. Could you please share some more information?


We don't sell anything.


Yes, you do. You charge for the "volunteering" program itself.

> The BlueWave Labs program is an incredible value at only $259 per month to cover the cost of training and mentorship.

You run a company that takes volunteer work and turns it into B2B software, which supposedly isn't even all open source ("For the closed source products, BlueWave Labs owns the code produced."), basically exchanging free labor for a letter of recommendation (but only after six months of indentured servitude, according to the FAQ). And you charge volunteers money to do it on top of that!

So the volunteers pay the "mentors'" (i.e., the company owners') salary and contribute code to your codebase for free on top of that. Amazing grift you've got going on! You can pull a MongoDB as soon as your products become mature enough to be worth real money to companies.

I'm not a fan of this "volunteer-built corporate software" structure. Anyone who knows how to code at all has enough skill to justify a paid internship. Service and support contracts or hosted SaaS around an open source product should be able to support paying your developers.


What are some of the issues you have seen with OTEL?


That is a good list; now we just need to prioritize (after finding the ICP).


Before you start adding all of that, make sure you have customers like the parent poster.

For example, I monitor disk space, RAM, and CPU, and that’s it for external tooling.

If any of those go above their thresholds, someone will log into the server and use Windows or Linux tooling to check what is going on.

I mostly monitor services' health-check endpoints, i.e., HTTP calls to our own services, to catch network outages or shoddy response times.
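A minimal version of that kind of health-check poller, just to illustrate (endpoint and thresholds are made up):

    // Poll a health endpoint and alert on errors, slowness, or timeouts.
    async function checkHealth(url: string, maxLatencyMs = 500) {
      const start = Date.now();
      try {
        const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
        const latency = Date.now() - start;
        if (!res.ok) return alertOps(`${url} returned ${res.status}`);
        if (latency > maxLatencyMs) return alertOps(`${url} slow: ${latency}ms`);
      } catch {
        alertOps(`${url} unreachable`); // network down or request timed out
      }
    }

    function alertOps(message: string) {
      console.error(message); // stand-in for a real pager/alerting hook
    }

    setInterval(() => checkHealth("https://service.internal/healthz"), 60000);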

So, all in all, not much about the servers themselves.

