My work thinks they're being forward-thinking by shoving AI into everything. We're not forced to use it, but we are encouraged to. Outside of using it to convert a screenshot to text (not even AI… that's just OCR), I haven't had much use for it since it's wrong a lot. It's pretty useless for the kind of one-off work I do as well. We're supposed to share any "wins" we've had, but I'd sooner they stop paying a huge subscription to Sammy A.

  • baggachipz@sh.itjust.works · 4 points · 12 hours ago

    We have a company-wide SMART goal to “utilize AI in our daily work”. Not sure how that can be proven, but whatevs. GitHub Copilot cost me an hour just yesterday: it put in a semicolon that didn't belong and was very tough to track down. It's nice when it works, but it will cost you lots of time when it doesn't. I guess I fulfilled my SMART goal?

  • iarigby@lemmy.world · 5 points · 14 hours ago

    Mine did, but they invested in an agent instead of just an LLM chat. I'm not interested in the chat-based ones, I don't think they provide much value, but Claude Code is genuinely a beast. My jaw drops every time I use it and it just does things for me, with a clear task list that it updates and a confirmation/edit prompt when editing files or running bash commands. I can delegate all my boring or precisely defined tasks to it and focus instead on coding and finding solutions to interesting problems.

    For example, last week I found a function parameter that was no longer in use. I usually go through git history to check when/why it was removed, but it's not something I do often, and going command-hunting every time is just not exciting. I gave it the following prompt:

    “in file x, function a, there is parameter b which is no longer used. Go through git history and find the commit when it was deleted from the function body”

    Then I was able to take a two-minute break while it executed git and grep commands. Yes, it is a small thing, but there are many small things like that throughout the day, and not having to do them saves a lot of mental capacity.
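
    For reference, here is a minimal sketch of the manual equivalent of that search, assuming git's pickaxe option (`-S`) stands in for whatever exact commands Claude Code actually ran; the file path and parameter name below are hypothetical placeholders, not anything from the post above.

    ```python
    # Minimal sketch (hypothetical file/parameter names): find the commit that
    # removed a parameter's uses from a file, using git's pickaxe search.
    import subprocess

    def find_removal_commit(filepath: str, symbol: str) -> str:
        # `git log -S <symbol> -- <file>` lists commits that changed the number
        # of occurrences of `symbol` in that file, newest first. The newest such
        # commit is usually the one that deleted the last remaining use.
        result = subprocess.run(
            ["git", "log", "-S", symbol, "--oneline", "--", filepath],
            capture_output=True, text=True, check=True,
        )
        lines = result.stdout.splitlines()
        return lines[0] if lines else "no matching commit found"

    if __name__ == "__main__":
        # Placeholder arguments mirroring the "file x, parameter b" in the prompt.
        print(find_removal_commit("src/x.py", "b"))
    ```

    The pickaxe won't explain why the parameter was removed, but it pinpoints the commit to read.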

  • Treczoks@lemmy.world · 2 points · 15 hours ago

    On one occasion I ran into a problem that took me weeks to solve. I had already exhausted all kinds of support, so people (including my boss) suggested trying ChatGPT. I had a difficult time explaining to them that it simply would be no help.

    In the end I fixed it myself by reading thousands of helpdesk articles and eventually drawing the right conclusion, but the problem was so dependent on local variables that an LLM would not have been able to find a solution here.

  • TootSweet@lemmy.world · 8 points · 23 hours ago

    My company recently announced to the whole IT department that they're contracting with Google to get Gemini for writing code and stuff. They even had someone from Google give a presentation rife with all kinds of propaganda about how much Gemini will “help” us write code. Demoed the IntelliJ integration and everything. I wouldn't say we were “asked” to use it, but we were definitely “encouraged” to. But since then, there's been no information on how to actually use our company-provided Gemini license/integration/whatever. So I don't think anyone's using it yet.

    I'd love to tell everyone on my team not to use it, and I am kind of “in charge” of my team a bit. But it's not like there aren't many (too many) levels of management above me, and it's clear they wouldn't have my back if I put my foot down about that. So I've told my team not to commit any code unless they understand it as well as they would have had they written it themselves. I figure that's sufficiently noncommittal that the pro-Gemini upper management won't have a problem with it, while also (assuming anyone on my team heeds it) minimizing the damage.

  • Boomkop3@reddthat.com · 31 points · 1 day ago (edited)

    My job has been paying hundreds of euros per person for access to Microsoft's LLM. And it really is just a toy at this point; it's been helpful to me a handful of times over the last year. A better investment would be a non-shitty version of Google.

    • bridgeenjoyer@sh.itjust.works (OP) · 18 points · 1 day ago

      No fucking kidding. I hate how bad internet search has become. Time to pay for Kagi, I guess… back in my day you could actually search the internet for real content and not slop and ads.

    • BlameTheAntifa@lemmy.world · 17 points · 1 day ago

      That’s not just a red flag, it’s a bunch. And they’re being waved by a marching band. I’d start looking for new work if I were you, because your company’s leadership is dangerously incompetent.

      AI is a “time saver” in only very narrow circumstances and only in the hands of certain people who are already at the top of their fields. You need to be able to immediately recognize when it's wrong, because if you don't, it costs time and, worse, introduces major flaws into your work.

  • ZDL@lazysoci.al · 21 points · 1 day ago

    My boss tried to get us to use the commercial ChatGPT account he got for the business, but within two months everybody stopped using it.

    He didn’t try to mandate it, but he did encourage us to try it. In the end it was just more work than doing what we needed to do ourselves without its “assistance”.

  • salvaria@lemmy.blahaj.zone · 12 points · 1 day ago

    My boss got us a ChatGPT corporate workspace or something? They gave me access, which I promptly “forgot” about (read: intentionally did not use). Then the other day, I asked my boss for help with a tricky piece of coding, and they screenshared while they typed my question directly into ChatGPT and sent me its answer…

    I tried using it once for a different piece of tricky coding. I ended up arguing with it, telling it multiple times that it was making the same error over and over in each new “totally corrected” answer it gave me. I gave up after an hour without a single step in the right direction.

  • GrayBackgroundMusic@lemm.ee · 7 points · 1 day ago

    Yep. It’s level 0 tech support, HR, etc. It’s about 50% successful. Then when it fails, it connects you to a person.

    • adhocfungus@midwest.social · 4 points · 10 hours ago

      I wouldn't be so opposed to it if that were the case with Copilot, but at my job it never “fails”. It never says, “I don't have enough data on that,” or, “You should contact an appropriate resource.” It always gives an answer, presented very confidently.

      Now I’m flooded with tickets from users saying, “I followed Copilot’s instructions and this still didn’t work,” with screenshots of Copilot where they asked it how to do something that is impossible with our software. Then I have to argue with them about it because they believe the LLM over IT. Or users asking for permission to see a button/link that doesn’t exist because the last 50% of the steps are pure hallucinations.

  • driving_crooner · 4 points · 1 day ago

    The company I work for has its own LLM. ChatGPT, DeepSeek, Claude, and the others are all blocked. Ours works OK and is available to anyone who wants to use it; nobody is forced to.

  • crawancon@lemm.ee · 7 points (1 downvote) · 1 day ago

    I'm on our Copilot demo team, so I help some business folks use it in Teams or Office a little. Mostly I design controls to limit its impact, but all users have access to build agents by default, which is dumb. So yeah, it's not my full-time gig by any means, but extracting “value” is subjective, and it takes time to build some use cases.

  • Yaky@slrpnk.net · 1 point · 1 day ago

    Haven't been asked yet, but my company is in the process of adding “AI” (as ambiguous as that is) to many business processes. (Presented right after a presentation on sustainability…)

    I know many developers love LLMs, but they seem so useless to me: an LLM is not going to fix tech debt or wonky git issues, or know how to query a complex 20+ year old DB. I have access to an LLM and wouldn't know what to ask it for that I couldn't do myself.

  • Remy Rose@piefed.social · 3 points · 1 day ago

    Not my department specifically, but my organization has. And I work for a county government, so… be worried about that going forward, I’d say.

  • potoo22@programming.dev · 2 points · 1 day ago

    As a programmer, I've got some pretty good use cases. GitHub Copilot has been hit or miss: it can handle tedious, common coding well, but the less common the task, the more trouble it has.

  • dwemthy@lemmy.world · 2 points · 1 day ago

    Encouraged to use it, had a vibe-coding hackathon, and released a new product six months ago built on a ChatGPT integration.