Dr. Tatum - Scantech
6/26/2023
Over the course of this STEMinar, Dr. Tatum did not focus on his own job in particular, but rather on the product his company produces and a general view of the process it goes through to be finalized. The products are automated machines that airport baggage is run through to identify dangerous contents such as guns, bombs, sharp objects, and unknown liquids. He then explained how this work correlates with STEM: building and designing the machines falls under mechanical and electrical engineering, while developing the artificial intelligence, which must determine the volume and mass of an identified object based on a 360-degree vertical X-ray of the baggage along with a horizontal view, incorporates physics.
Following this, he explained the unique aspects of the machines the company makes aside from their automation. These include the reasoning behind the additional horizontal view, which is used to identify objects too narrow to detect from the 360-degree vertical view alone, and the use of different materials to block out "noise" (what I assume to be static) from the images, replacing the semiconductor used previously, which did not take in the light as effectively. I later learned how to identify materials in the images: metals appear dark blue, while organic materials appear light brown. Dr. Tatum also said that different systems from Microsoft and Google were used to speed up confirming or denying an image that may show a threat, and he emphasized that there are free online courses from accredited colleges covering artificial intelligence and related topics.
STEMinar alternative – Ted Talk Interview
“The Race to Build AI that Benefits Humanity”
Recorded in April 2021
Sam Altman – OpenAI
Throughout the interview, Altman discussed the current abilities and qualities of various artificial intelligence systems, along with popular ideas about AI, as a basis for further discussion. The initial prospects presented concerned the potential capabilities of AI, such as presenting information in a way that is optimal for each person's comprehension, answering requests for information, predicting future findings, and creating media, as well as the development required to accomplish this, including determining the validity of information and whether requests are morally acceptable. Discussion then led into potential consequences, which would derive both from misuse of AI and from AI's inability to perform effectively, as well as measures that should be taken to prevent misuse, such as placing general restrictions on capability and providing incentives that lead to beneficial outputs. Thereafter, the conversation detailed the founding of OpenAI, which Altman started with co-founders who were college students at the time; this emphasized the organization's well-meaning incentives in producing viable technology and the importance of entrepreneurship in the betterment of the world.
From this content I learned about the limitations and abilities of some current artificial intelligence systems, as well as concerning possibilities and the relevance of social structure to AI development. For instance, GPT-3, a natural language text model developed by OpenAI, was able to make sense of human commands and construct desired works from a small sample of examples without knowing the concept of translation; in doing so, it can develop a response that includes both a complex understanding of a topic and irrelevant information. An AI has a main incentive or goal to accomplish, which it completes, yet it often produces unintended negative side effects as a result of the task being performed, including effects on the well-being of societies. Thus, it holds great power over the population and should therefore be limited or made accessible according to a form of governance preferred by that populace. AI can also learn from direct human affirmation or rejection, yet currently cannot reason from a complex answer.