• 8 Posts
  • 1.08K Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • 91 Honda Civic CRX Si w/black gold fleck paint and a moon roof. Not rare but definitely niche.

    I LOVED this car. Super short wheelbase and an absolute sleeper since it was so light. The only ICE car I’ve been in that could get going like an EV. And you were practically sitting on the road: with the door open I could easily put my palm on the ground. Even at 25 mph it felt like you were speeding. So much fun to drive.

  • I live in an at-will employment state and have been a manager for quite some time. I’ve never seen an employee actually terminated for their protected status (race, religion, etc.). It’s always been because they had poor performance and/or attendance issues and didn’t want to get better. If you aren’t at least a solid average, then it’s develop up or out. This isn’t my POV; this is the reality of the performance conversations I’ve been involved with. Personal accountability is a major problem these days. If you have none, then you won’t have a job for long. The good news is that if you’re solid in those areas, you will be valuable to your employer. This is why so many military applicants get picked up: they have a basis for attendance and completing the mission.

    Having said that, I’m sure you’re correct that discrimination does happen and that employers lie about the reason. I just think it doesn’t happen quite as often as believed. Many poor performers I’ve known have outright lied about why they were actually terminated.

  • They absolutely do not learn and we absolutely do know how they work. It’s pretty simple.

    Generative AI needs massive training sets that represent the kinds of things it’s asked to generate. Through the process of training, the AI learns the patterns in the data and can generate new data that fits within those patterns. It’s statistics all the way down. In the case of a Large Language Model (LLM), it’s always asking itself, “what’s the most likely word to come after the previous word, and does that next word make sense within the context of the other words in the sentence?” LLMs don’t necessarily understand a text as a text, that is, as a sequence of ideas unfolding logically, but rather as a set of tokens that carry statistical weights.
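    The “most likely next word” idea can be sketched with a drastically simplified toy: a bigram model that just counts which word follows which in a tiny corpus. (Real LLMs use neural networks over tokens and huge contexts, not raw word counts; the corpus and function names here are made up for illustration.)

    ```python
    from collections import Counter, defaultdict

    # Toy corpus; a real model trains on billions of tokens.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each preceding word.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def most_likely_next(word):
        """Return the statistically most frequent successor of `word`."""
        followers = transitions[word]
        return followers.most_common(1)[0][0] if followers else None

    print(most_likely_next("the"))  # "cat" (follows "the" twice; "mat"/"fish" once each)
    ```

    The point is that the prediction is purely frequency-based: nothing in the table “understands” cats or fish, it just carries statistical weights, which is the same principle an LLM scales up enormously.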

    https://jasonheppler.org/2024/05/23/i-made-this/