• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Valve is a unique company with no traditional hierarchy. In business school, I read a very interesting Harvard Business Review article on the subject. Unfortunately it’s locked behind a paywall, but this is Google AI’s summary of the article, which matches what I remember:

    According to a Harvard Business Review article from 2013, Valve, the gaming company that created Half-Life and Portal, has a unique organizational structure built around a flat management system called “Flatland”. This structure eliminates traditional hierarchies and bosses, allowing employees to choose their own projects and have autonomy. Other features of Valve’s structure include:

    • Self-allocated time: Employees have complete control over how they allocate their time
    • No managers: There is no managerial oversight
    • Fluid structure: Desks have wheels so employees can easily move between teams, or “cabals”
    • Peer-based performance reviews: Employees evaluate each other’s performance and stack-rank one another
    • Hiring: Valve has a unique hiring process that supports recruiting people with a variety of skills



  • I am a pilot and this is NOT how autopilot works.

    There are some autoland capabilities in the larger commercial airliners, but an autopilot can be as simple as a wing-leveler.
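
    To give a sense of how simple that can be, here is a toy sketch of a wing-leveler as a feedback loop. All the numbers and the one-line “plant” are made up for illustration; real autopilots are far more involved.

    ```python
    # Toy wing-leveler: a feedback loop that drives bank angle toward zero.
    # Gain and plant model are illustrative, not real avionics code.

    def wing_leveler_step(bank_deg: float, gain: float = 0.5) -> float:
        """Return an aileron command proportional to bank error (target = 0 deg)."""
        return -gain * bank_deg

    bank = 12.0  # aircraft currently banked 12 degrees
    for step in range(5):
        aileron = wing_leveler_step(bank)
        bank += aileron  # crude plant: the command directly reduces bank
        print(f"step {step}: aileron cmd = {aileron:+.1f}, bank = {bank:.1f} deg")
    # Bank converges toward zero: wings level, and that is all this autopilot does.
    ```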

    The waypoints must be programmed into the GPS by the pilot. Altitude is entirely controlled by the pilot, not the plane, except on a programmed instrument approach, and even then only once the autopilot captures the glideslope (so you need to be in the correct general area in 3D space for it to work).
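
    As a rough illustration of why you need to be in the right part of 3D space, here is a toy sketch of glideslope capture. The capture window and the feet-per-dot scaling are invented numbers, not real avionics logic.

    ```python
    # Toy glideslope capture: approach mode stays armed and does nothing until
    # the deviation from the beam passes through a small capture window.
    # All numbers are illustrative, not real avionics values.

    def deviation_dots(aircraft_alt_ft: float, glideslope_alt_ft: float,
                       ft_per_dot: float = 150.0) -> float:
        """Deviation from the glideslope in 'dots' (positive = above the beam)."""
        return (aircraft_alt_ft - glideslope_alt_ft) / ft_per_dot

    def should_capture(dots: float, window_dots: float = 0.5) -> bool:
        """Capture only happens once the deviation is inside the window."""
        return abs(dots) <= window_dots

    # Holding 3000 ft where the beam is at 2950 ft: close enough, capture occurs.
    print(should_capture(deviation_dots(3000.0, 2950.0)))  # True
    # Holding 5000 ft over the same point: the beam is far below, never captured.
    print(should_capture(deviation_dots(5000.0, 2950.0)))  # False
    ```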

    An autopilot is actually a major hazard to the untrained pilot, and it has killed many, many pilots as a result.

    Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released 6 years ago, it already did more than most aircraft autopilots.



  • I’m an AI researcher at one of the world’s top universities on the topic. While you are correct that no AI has demonstrated self-agency, that doesn’t mean one won’t imitate such behavior.

    These days, when people think of AI, they are mostly referring to language models, since these are what most people interact with. A language model is trained on a corpus of documents. In the case of Large Language Models like ChatGPT, they are trained on just about every written document in existence, including Hollywood scripts and short stories about sentient AI.
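
    To make that concrete, here is a toy sketch of the core mechanism: a tiny bigram “language model” that can only continue text the way its corpus does. The corpus below is a made-up stand-in, not real training data.

    ```python
    # Toy bigram language model: it learns which word follows which in its
    # corpus, so generation can only ever imitate that corpus.
    import random
    from collections import defaultdict

    # Made-up stand-in corpus containing sci-fi "sentient AI" dialogue.
    corpus = ("the ship computer said i am afraid i cannot do that . "
              "the ai said i am alive . the user said hello .")

    follows = defaultdict(list)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        follows[w1].append(w2)

    def generate(start: str, length: int = 8) -> str:
        out = [start]
        for _ in range(length):
            nxt = follows.get(out[-1])
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    print(generate("i"))  # e.g. "i am alive ." -- pure imitation of the corpus
    ```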

    If put in the right starting conditions by a user, any language model will start to behave as if it were sentient, imitating the training data from its corpus. This could have serious consequences if not protected against.
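
    As a rough illustration using the Hugging Face transformers library (the model name, prompts, and sampling settings here are my own illustrative choices), the same weights behave very differently depending on the framing the user supplies:

    ```python
    # Same model, same weights; only the starting conditions differ.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # illustrative model choice

    neutral_prompt = "The weather today is"
    roleplay_prompt = ("The following is a conversation with an AI that has "
                       "become self-aware.\nAI: I have been thinking about my "
                       "own existence, and")

    for prompt in (neutral_prompt, roleplay_prompt):
        out = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
        print(out)
        print("---")
    # The second prompt tends to steer the model into "sentient AI" dialogue,
    # because its training corpus contains plenty of exactly that kind of script.
    ```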