Philosophical Question: Can An Objective Observer Know That An Event Is Significant?

Suppose you are in a room, listening to a lecture. The professor says something important to you, so you write it down. Also in that same room is a video camera recording that lecture. Would it be possible for an AI to tag those moments in the video that you would think are important? If so, how?

Or, suppose that you have medical equipment monitoring a patient by recording his biological state constantly. It would be possible for the equipment to tag significant moments by setting up out-of-bounds conditions. But, suppose the equipment doesn’t monitor the body; it monitors the environment and tags events in the environment that are significant to the patient. How would it know what events in the environment are significant to the patient?
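For the out-of-bounds part, a minimal sketch might look something like the following (the field names and thresholds are invented for illustration, not taken from any real equipment):

```python
# Minimal sketch of "out-of-bounds" tagging for the monitoring case.
# Field names and limits are made up; real equipment would use
# clinically chosen thresholds.

BOUNDS = {
    "heart_rate": (50, 120),   # beats per minute
    "systolic_bp": (90, 140),  # mmHg
    "spo2": (92, 100),         # percent
}

def tag_out_of_bounds(sample):
    """Return the names of any readings outside their allowed range."""
    flags = []
    for name, (low, high) in BOUNDS.items():
        value = sample.get(name)
        if value is not None and not (low <= value <= high):
            flags.append(name)
    return flags

# This sample would be tagged as significant because of blood pressure.
print(tag_out_of_bounds({"heart_rate": 80, "systolic_bp": 155, "spo2": 97}))
```

That part is easy precisely because the significance criterion is written down in advance; the hard question is the environmental one.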

Significance, like beauty, is in the eye of the beholder. If you want an example, see the discussion at
https://talk.dallasmakerspace.org/t/need-help-selecting-a-music-system-for-art-room/3700

There are many things that large numbers of people think are important, but few of those people would agree on how to recognize them.

Russell

I don’t think there is a way to tell whether someone thinks something is significant based on their actions. OK, they wrote something down. Was it a note about the lecture, or was it a doodle of a cat because they’re bored out of their mind? I don’t think any single specific movement or action can be directly correlated to “this is interesting to me.”

Are you sure this is philosophical?

Especially for the latter, where I go immediately is this: if you can measure appropriately minuscule and prolific amounts of environmental data, you can extrapolate the biological responses of a patient, essentially monitoring the body by proxy.
Ditto for the first scenario, assuming other organisms besides yourself are left in the room and have similar responses to your own, or your notes are keyed off of experience with the prof.
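If you could actually pull that off, the “body by proxy” step would amount to fitting some predictive model from environmental readings to biological readings. A toy sketch, assuming numpy and with all numbers invented:

```python
# Hedged sketch of "monitoring the body by proxy": fit a simple linear model
# that predicts a biological reading (here, heart rate) from environmental
# readings (here, room noise and temperature). All numbers are invented.
import numpy as np

# Historical observations: [noise_db, temp_f] -> heart_rate
env = np.array([
    [40.0, 70.0],
    [55.0, 72.0],
    [70.0, 75.0],
    [85.0, 78.0],
])
heart_rate = np.array([62.0, 68.0, 75.0, 88.0])

# Ordinary least squares with an intercept column.
X = np.column_stack([env, np.ones(len(env))])
coef, *_ = np.linalg.lstsq(X, heart_rate, rcond=None)

# Estimate the patient's response to a new environmental state.
new_env = np.array([65.0, 74.0, 1.0])
print("predicted heart rate:", new_env @ coef)
```

Whether such a proxy model says anything about what the patient finds *significant* is, of course, the open question.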

I think the issue is that the event in question is only one of many things that could trigger such a response. Determining whether the response is the result of the event or of something else is difficult (if not impossible without active neural scans).

Actually, the internet is full of this type of thing. Amazon, Google, and others like them have spent huge amounts of money to collect data to try to determine what they can sell you. In my experience, they get it wrong 95% of the time. This is mostly because they do not really know anything about me. They work on assumptions extrapolated across their understanding of a wide range of people.

Yes, YOU could develop a complex algorithm that could rank things according to YOUR values and preferences, but the time and effort to do it (properly) would far outweigh the advantages. You MIGHT be able to pick a packaged program based on values that were similar to yours, but it would be imperfect at best.

The iTunes Genius feature does a pretty good job of picking songs that are “like” another and making playlists that predict what I will like, but the broader the range of data (bands and songs in my playlist), the less accurate the selections will be.
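Roughly speaking, a “songs like this one” picker boils down to ranking by similarity over feature vectors. A toy sketch (the features and values here are made up, not how Genius actually works):

```python
# Rank songs by cosine similarity to a seed song. Feature vectors
# (e.g. energy, acousticness, scaled tempo) and values are invented.
import math

songs = {
    "song_a": [0.9, 0.2, 0.7],
    "song_b": [0.8, 0.3, 0.6],
    "song_c": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

seed = "song_a"
ranked = sorted(
    (name for name in songs if name != seed),
    key=lambda name: cosine(songs[seed], songs[name]),
    reverse=True,
)
print(ranked)  # songs most "like" song_a first
```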

Data mining and interpretation of any type will always be flawed because of the huge number of variables and the insane amount of data that has to be looked at.


Sometimes it is easy, like when the Professor prefaces his/her statement with “this will be on the test…”


What someone thinks is important is largely driven by their personal history and experiences. So, presumably, if an AI were to monitor you from birth and be able to learn your reactions to all situations it wouldn’t be tough for it to make probabilistic estimates of your future behavior (and hence determine what is important to you).

But even with a complete record of everything you have ever said, done, and even thought, it is still only going to be a probabilistic estimate since people rarely behave in a rational manner. We are emotionally driven creatures, and those emotions can produce very different behaviors from us depending upon our ‘mood’ when some event occurs.
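In code terms, that “probabilistic estimate” is really just counting how the person reacted in similar situations before. A toy sketch with an invented life log:

```python
# Estimate P(person treats this as important | situation) from a logged
# history of past reactions. The history below is invented.
from collections import Counter

# (situation, reacted_as_important) pairs from a hypothetical life log.
history = [
    ("lecture", True), ("lecture", True), ("lecture", False),
    ("commute", False), ("commute", False), ("concert", True),
]

def p_important(situation, history, smoothing=1.0):
    """Laplace-smoothed estimate of P(important | situation)."""
    counts = Counter((s, r) for s, r in history if s == situation)
    important = counts[(situation, True)] + smoothing
    total = sum(counts.values()) + 2 * smoothing
    return important / total

print(p_important("lecture", history))  # ~0.6: a probability, never a certainty
```

The smoothing term is one way of admitting that the estimate stays uncertain; mood and emotion guarantee it never reaches 1.0.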

Statistician and Data Scientist here. The quick answer is yes. The long answer is that it is possible to define signals from a stream of data, given that you have a good historical set of observations. The signals could be known markers of important data points (professor: “this will be on the exam”; bio state: blood pressure = 135/90; environment: sleeping, lights off, temp = 75, etc.). The stream of data could come from NLP, video image analysis, text transcripts, bio and enviro readings, or the like. The good historical set is probably the hardest part: you would want enough history of data with good/bad signals that you could derive a model.
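A rough sketch of that last step (deriving a model from a labeled history), assuming something like scikit-learn is available and with entirely made-up features:

```python
# Fit a simple classifier on historical moments labeled important/not,
# then score new moments from the live stream. Features are invented.
from sklearn.linear_model import LogisticRegression

# Each row is one moment: [prof_said_exam, note_taking_rate, ambient_noise]
X_history = [
    [1, 0.9, 0.2],
    [0, 0.1, 0.3],
    [1, 0.7, 0.1],
    [0, 0.2, 0.8],
    [0, 0.8, 0.2],
    [1, 0.3, 0.5],
]
# 1 = the person later marked this moment as important, 0 = they did not.
y_history = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_history, y_history)

# Probability that each new moment is "significant" to this person.
new_moments = [[1, 0.8, 0.1], [0, 0.1, 0.9]]
print(model.predict_proba(new_moments)[:, 1])
```

The model itself is trivial; getting an honest labeled history of what this particular person considered important is where all the work is.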

If ya wanna have some fun with this, and you have the opportunity, perform this exercise in a room with other people.
If you can’t, try reading through this online discussion and draw your own conclusions.
What I discovered in a live session is very closely related to Walter’s assertion: whatever was going on in a person’s life weighed HEAVILY in their ranking. The funny thing, too, was that some of the people involved changed their rank when asked to repeat the exercise on a different day…

Certified idiot here:
Sure, an AI could do it if the “you” referred to in the prompt gave adequate constraints (though those are hard to define in comp. sci. terms [?]) on what they consider important. The second scenario seems like it would tag many false positives and, unless the cues could be tailored to a given subject, false negatives.

As for the philosophical filtration: importance becomes increasingly difficult to capture in precise terms as its bounds of representation (“important to whom?”) grow. And I would warrant that anything with sense data is perforce subjective. Although the rules by which an object-network operates appear to be solely external and observed by a subject, I’d guess that these rules are an internal model that aims to accurately reproduce the rules themselves, vis-à-vis intersubjectivity (i.e., reliant on subjective experience, characterized in part by qualia, and hence by sense data).

I take “objective” in the prompt to mean non-anthropomorphic, though the mechanisms by which data is revealed resemble our very own (lenses, transducers, diaphragms). That being said, I might just go ahead and say the difference between the biological meat brain and the robo-brain seems illusory in many respects (on my part, a lame functionalist argument) other than sheer convolution. The furiously complicated neural processes that give a span of time, an object and its function, or an idea enough importance to store it as a memory would be mad difficult to bring up into a robo-brain and still call it “objective” (meaning “non-anthropomorphic”). Undoubtedly, this paragraph contains many oversights.

What might also be interesting to see is a lecture transcribed by voice-to-text software, producing somewhat viable data which can then be matched, by order of relevancy (likes, tags, link clicks, replies), against topics produced on select websites and here on the forum. Dunno how someone would go about chunking the data provided by the v2t (assuming it can faithfully reproduce the content of the lectures) other than by feedback with trending data on the aforementioned websites/forums, so that words with high frequency on those boards (excluding articles, conjunctions, and other grammatical artifacts) are preferred in how the AI tags the videos. From there, it should follow that the viewer is then free-ish to decide what is significant using a much more easily searched pool. This entire deal relies on the v2t software being up to snuff, on the online resources generating and leaving metadata open to third parties, and on a programmer conceivably being able to write a program that does this stuff.
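The frequency-minus-grammatical-filler part is easy enough to sketch (the transcript and stopword list here are placeholders; the cross-check against trending forum terms is left out):

```python
# Pull candidate tag words out of a v2t transcript by counting word
# frequency while skipping grammatical filler. Transcript and stopword
# list are placeholders.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that", "this"}

transcript = (
    "the wave equation describes how the wave propagates and the wave "
    "equation will be on the exam"
)

words = re.findall(r"[a-z']+", transcript.lower())
freq = Counter(w for w in words if w not in STOPWORDS)

# Most frequent content words become candidate tags; these could then be
# cross-checked against trending terms on the forum or other sites.
print(freq.most_common(5))
```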

When Oppenheimer witnessed the detonation of the first atomic bomb he is quoted as saying: “Now, I am become Death, the destroyer of worlds.” So I suppose there are times you can know.