• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 27th, 2023


  • I’m honestly surprised that nobody has mentioned MS Office. It’s not that I expect anyone to miss the application itself; it’s that if your work requires you to interface with it, there really is no alternative to running Windows or macOS. Microsoft’s own Office Online versions of the apps do a worse job of maintaining DOC/PPT formatting consistency than the possible Russian spyware that is OnlyOffice, which itself screws things up too often to be relied upon. LibreOffice is, let’s be honest, a total mess (with the exception of Calc, which also isn’t consistent with the current version of Excel but can do some things Excel no longer can, so I appreciate it more as a complementary tool than as a replacement).



  • The musical instrument thing is transitory and depends entirely on the instrument.

    Pre-relationship: in a popular band, playing a more traditional instrument like guitar, with a bunch of other attractive people (or at least being part of a cool local scene) = hot

    In a relationship and/or producing any kind of electronic music solo in your bedroom, and/or buying lots of synthesizers, drum machines, or grooveboxes = not hot

    Also note how low “clubbing” is on the least attractive list, so no, DJs and electronic musicians who perform live don’t get a pass



  • There are a bunch of reasons why this could happen. First, it’s possible to “attack” some simpler image classification models: if you get a large enough sample of their outputs, you can mathematically derive a way to process any image such that it won’t be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall with a synthetic image at a very low percentage, can trip up detectors that haven’t been trained to be more discerning. But it all comes down to how you construct the training dataset, and I don’t think any of this is a good enough reason to give up on using machine learning for synthetic media detection in general; in fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model. The challenge there, as in general, is trying to keep such a model from assuming that all anime is synthetic, since “AI artists” seem to be overly focused on anime and related styles…
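
    Just to make that low-percentage blending trick concrete, here is a minimal, hypothetical sketch; the file names, the 5% blend ratio, and the idea that the output gets fed to a naive detector are all illustrative assumptions, not anything from the reports mentioned above:

    ```python
    # Blend a small fraction of a real photo (e.g. a wall texture) into a
    # generated image; a detector trained only on pristine real vs. purely
    # synthetic images may no longer flag the result as synthetic.
    import numpy as np
    from PIL import Image

    # Hypothetical file names for illustration only.
    synthetic = np.asarray(Image.open("generated.png").convert("RGB"), dtype=np.float32)
    real = np.asarray(
        Image.open("wall_photo.jpg").convert("RGB").resize(synthetic.shape[1::-1]),
        dtype=np.float32,
    )

    alpha = 0.05  # only 5% of the real photo is mixed in
    blended = (1.0 - alpha) * synthetic + alpha * real
    Image.fromarray(np.clip(blended, 0, 255).astype(np.uint8)).save("blended.png")
    ```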







  • Isn’t GPT-4o (the multimodal model currently offered by OpenAI) supposed to be able to do things like this?

    Don’t get me wrong, I think you would be better served by taking this as a fun exercise to develop your imagination and writing skills. But since it’s fanfic and presumably for personal, non-commercial purposes, I would consider what you want to do a fair and generally ethical use of the free version of ChatGPT…




  • I am a consultant who sometimes writes code to do certain useful things as part of larger systems (parts of which may be commercial or GPL), but my clients always try to impose contract terms saying that anything I develop immediately becomes theirs, which limits my ability to reuse it in my next project. I can, to some extent, circumvent this if I find a way to publish the work, or some essential part of it, under an MIT license. I’m never going to make money off my code directly; at best it’s middleware, and my competitors don’t use the same stack, so I’m not giving them any real advantage… I don’t see how I’m sabotaging myself in this situation; if anything, the MIT license is a way of securing my own freedom, and it benefits my future customers as well, since I don’t have to rebuild from scratch every time.


  • Running such a bot with an intentionally underpowered language model that has been trained to mimic a specific Reddit subculture is good clean absurdist parody comedy fun if done up front and in the open on a sub that allows it, such as r/subsimgpt2interactive, the version of r/subsimulatorgpt2 that is open to user participation (a rough sketch of that kind of bot follows this comment).

    But yeah, fuck those ChatGPT bots. I recently posted on r/AITAH and the only response I got was obviously from a large language model… it was infuriating.
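
    For illustration, here is a minimal sketch of the kind of intentionally underpowered bot described above, using the off-the-shelf gpt2 checkpoint via the transformers library. The model name and prompt are placeholders; an actual subsimulator bot would load a GPT-2 model fine-tuned on posts from one specific subreddit:

    ```python
    # Generate one fake post with a small, deliberately weak language model.
    from transformers import pipeline

    # Placeholder: a real bot would point this at a subreddit-specific fine-tune.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "AITA for"  # hypothetical prompt in the style of the target sub
    result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)
    print(result[0]["generated_text"])
    ```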