• Onno (VK6FLAB)@lemmy.radio
    1 month ago

    Here’s some of what’s happening in my country, Australia:

    Not sure where Tasmania and the ACT are at, but those links are the federal and most state government data portals.

    Behind those sits a wide variety of data, from land use to baby names and everything in between.

    The Australian Bureau of Statistics has its own site:

    • Dave@lemmy.nz
      1 month ago

      NZ as well: https://data.govt.nz

      Though it takes work for the different government departments to maintain. The team at data.govt.nz works with the departments to identify suitable data sources and get them into an update cycle, but by no means all data can be released there.

      • Onno (VK6FLAB)@lemmy.radio
        1 month ago

        Yeah, same kind of process in Oz.

        AFAIK it was triggered by an annual event called GovHack, where people were encouraged to create “hacks” with government data. It included software developers like me, data mentors from many different government departments, people with an interest, and several departments with questions of their own.

        • Dave@lemmy.nz
          1 month ago

          I think NZ’s is a similar story. GovHack is run in NZ as well, though I haven’t personally been involved in an event.

          • Onno (VK6FLAB)@lemmy.radio
            1 month ago

            A decade ago I participated in three of them and won several awards, but I was disappointed with the government response to all our collective efforts and stopped participating.

            Specifically, “not invented here” was a prevalent response to projects that represented hundreds of man-hours of effort.

            It was demoralising to say the least.

            I’m not sure what the missing ingredient was, but two of our projects were directly related to government efforts in public transport and public housing. Neither went anywhere, despite face-to-face presentations to senior stakeholders in the relevant departments.

            The third was a search engine with a completely different approach to that in use by the popular engines.

              • Onno (VK6FLAB)@lemmy.radio
                1 month ago

                Borrowing the six-degrees-of-separation idea of reaching any person on the planet in a handful of steps, I came up with the idea of a word cloud representing the top N words across all documents.

                When you click on a word (say “alpha”), the resulting word cloud represents the top N words across all the documents containing “alpha”.

                As you keep clicking (bravo → charlie, etc.), the list of documents gets smaller and smaller until only the document you need remains.

                This has several advantages: you don’t need to distinguish between words and numbers, “understand” the meaning of a word, or interpret the user’s intent.

                More importantly, the user doesn’t need to know the relevant words or vocabulary, since they’re all represented in the UI.

                Enhancements include allowing negative words, as in: exclude documents containing this word.
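
                As a rough sketch of how those clicks could drive the filtering (the corpus, word lists, and top-N size here are made up for illustration; only per-document word counts are kept):

```python
from collections import Counter

# Hypothetical toy corpus (illustration only); a real index would be
# built from crawled documents.
DOCS = {
    "doc1": "alpha bravo charlie",
    "doc2": "alpha bravo delta",
    "doc3": "alpha echo",
}

# The only thing stored per document: a word -> count mapping.
COUNTS = {doc: Counter(text.split()) for doc, text in DOCS.items()}

def word_cloud(selected, excluded=(), top_n=10):
    """Top-N words across the documents that contain every selected
    word and none of the excluded ("negative") words."""
    remaining = [
        counts for counts in COUNTS.values()
        if all(w in counts for w in selected)
        and not any(w in counts for w in excluded)
    ]
    total = Counter()
    for counts in remaining:
        total.update(counts)
    for w in selected:        # don't re-offer words already clicked
        total.pop(w, None)
    return [w for w, _ in total.most_common(top_n)]

# Each click narrows the document set:
print(word_cloud([]))                   # cloud over every document
print(word_cloud(["alpha"]))            # documents containing "alpha"
print(word_cloud(["alpha", "bravo"]))   # narrowed further
```

                Note that once the counts are built, the original text is never consulted.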

                • Dave@lemmy.nz
                  1 month ago

                  Ah, that sounds really interesting! Does it scale OK? I guess you could index at the word level and filter quite quickly, but wouldn’t you have to store the full text of every website?

                  • Onno (VK6FLAB)@lemmy.radio
                    1 month ago

                    You store just the word count for each word at each URL.

                    The search is pretty trivial in database terms, since you don’t need any wildcard or LIKE matching.
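
                    A minimal sketch of that storage scheme (the table name, URLs, and counts are made up for illustration):

```python
import sqlite3

# One row per (url, word) pair holding the count -- no full text stored.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE word_counts (url TEXT, word TEXT, n INTEGER, "
    "PRIMARY KEY (url, word))"
)
con.executemany(
    "INSERT INTO word_counts VALUES (?, ?, ?)",
    [
        ("https://example.org/a", "alpha", 3),
        ("https://example.org/a", "bravo", 1),
        ("https://example.org/b", "alpha", 2),
    ],
)

# Exact equality matching only -- no LIKE or wildcard scans.
# URLs whose documents contain both "alpha" and "bravo":
rows = con.execute(
    "SELECT url FROM word_counts "
    "WHERE word IN ('alpha', 'bravo') "
    "GROUP BY url "
    "HAVING COUNT(DISTINCT word) = 2"
).fetchall()
print(rows)  # [('https://example.org/a',)]
```

                    The PRIMARY KEY on (url, word) gives an index that the equality lookups can use directly.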