What is used to describe a particular phenomenon by observing it as it occurs in nature?

What is qualitative research?

Qualitative research is a process of naturalistic inquiry that seeks an in-depth understanding of social phenomena within their natural setting. It focuses on the "why" rather than the "what" of social phenomena and relies on the direct experiences of human beings as meaning-making agents in their everyday lives. Rather than relying on logical and statistical procedures, qualitative researchers use multiple systems of inquiry for the study of human phenomena, including biography, case study, historical analysis, discourse analysis, ethnography, grounded theory, and phenomenology.

University of Utah College of Nursing. (n.d.). What is qualitative research? [Guide]. Retrieved from https://nursing.utah.edu/research/qualitative-research/what-is-qualitative-research.php#what


Descriptive design is used to describe a particular phenomenon by observing it as it occurs in nature. There is no experimental manipulation, and the researcher does not start with a hypothesis. The goal of descriptive research is only to describe the person or object of the study. An example of a descriptive research design is the determination of the different kinds of physical activities high school students do, and how often they do them, during the quarantine period.

The correlational design identifies the relationship between variables. Data are collected by observation, since this design does not consider cause and effect: for example, the relationship between the amount of physical activity done and student academic achievement (a brief illustrative sketch follows this overview).

The ex post facto design is used to investigate a possible relationship between previous events and present conditions. The term "ex post facto," which means "after the fact," refers to looking at the possible causes of an already occurring phenomenon. Just like the first two designs, there is no experimental manipulation here. An example of this is "How does the parents' academic achievement affect the children's obesity?"

A quasi-experimental design is used to establish a cause-and-effect relationship between variables. Although it resembles the experimental design, the quasi-experimental design has lesser validity due to the absence of random selection and assignment of subjects. Here, the independent variable is identified but not manipulated, and the researcher does not modify pre-existing groups of subjects. The group exposed to treatment (experimental) is compared to the group unexposed to treatment (control): for example, the effects of unemployment on attitudes toward following safety protocols in areas where ECQ has been declared.

The experimental design, like the quasi-experimental design, is used to establish the cause-and-effect relationship of two or more variables. This design provides a more conclusive result because it uses random assignment of subjects and experimental manipulation: for example, a comparison of the effects of various blended learning approaches on the reading comprehension of elementary pupils.
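As a hedged, illustrative sketch of the correlational design just described (not part of the original module), the snippet below computes a Pearson correlation between weekly physical activity and academic scores. All data and variable names are hypothetical and serve only to show the kind of analysis involved.

```python
# Hypothetical data: a minimal correlational analysis sketch.
# Pearson's r measures association only; it does not establish cause and effect.
import statistics

activity_hours = [2, 4, 1, 5, 3, 6, 2, 4]          # weekly hours of physical activity
academic_score = [75, 82, 70, 88, 80, 90, 74, 85]  # corresponding achievement scores

r = statistics.correlation(activity_hours, academic_score)  # requires Python 3.10+
print(f"Pearson correlation: r = {r:.2f}")
```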

Foreword

Stuart K. Card, in Designing with the Mind in Mind (Second Edition), 2014

Many natural phenomena are easy to understand and exploit by simple observation or modest tinkering. No science needed. But some, like capacitance, are much less obvious, and then you really need science to understand them. In some cases, the HCI system that is built generates its own phenomena, and you need science to understand the unexpected, emergent properties of seemingly obvious things. People sometimes believe that because they can intuitively understand the easy cases (e.g., with usability testing), they can understand all the cases. But this is not necessarily true. The natural phenomena to be exploited in HCI range from abstractions of computer science, such as the notion of the working set, to psychological theories of human cognition, perception, and movement, such as the nature of vision. Psychology, the area addressed by this book, is an area with an especially messy and at times contradictory literature, but it is also especially rich in phenomena that can be exploited for HCI technology.


URL: https://www.sciencedirect.com/science/article/pii/B9780124079144060012

Time, Anthropology of

E.K. Silverman, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Universal Time

Certain natural phenomena appear to be universally temporalized in terms of periodicity and, in some but not all cultures, progressive quantification: the human lifecycle and bodily processes (e.g., menstruation), seasonality, celestial patterns, day/night, and so forth. The same is true for the reproduction of social order by generation and various social and kinship groups. All societies regularly coordinate labor, occasionally punctuate the tempo of everyday life with ritual, and envision some type of past, present, and future. Calendrical arrangements of days, weeks, months, and years are also universal—but, again, not everywhere counted. In anthropological theory, time is implicitly understood to be binary. For example, all cultures accommodate conceptual categories and linguistic terms for duration and sequence, which are often said to be the basic forms of time. Leach (1961) proposed that time is a social construction rooted in two basic human experiences: the repetitions of nature, and the irreversibility of life. Many anthropologists contend that all societies possess both incremental/linear/irreversible and episodic/cyclical/repetitive forms of time—which are not necessarily antithetical since time cycles can return to the ‘same logical, not temporal, point’ (Howe 1981). While Farriss (1987) claims that any one mode can incorporate the other, Leach proposes that all religions deny the finality of human mortality by subsuming linear time under a cyclical framework. Birth and death become two phases of an eternal sequence. Leach also claimed that time is everywhere a ‘sequence of oscillations between polar opposites.’ This temporal pendulum is related to basic sociological processes such as reciprocal gift-exchange and marriage. More recently, Gell (1992) argued that all culture-specific patterns of temporality are variations of two cognitive modes: an A-series which orders events according to relative notions of pastness, presentness, and futurity, and a B-series of absolute before/after.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767009797

Auxiliary Equipment in CCTV

Vlado Damjanovski, in CCTV (Third Edition), 2014

Lightning protectors

Lightning is a natural phenomenon that we can do little to prevent. It induces strong electromagnetic forces in copper cables; the closer the strike, the stronger the induction. PTZ sites are particularly vulnerable because they have copper video, power, and control cables concentrated in one area. Good and proper earthing is strongly recommended in areas where intense lightning occurs, and of course surge arresters (also known as spark or lightning arresters) should be put inside all the system channels (control, video, etc.). Most good PTZ site drivers have spark arresters built in at the data input terminals and/or galvanic isolation through the communication transformers.


A coax lightning protector

Spark arresters are special devices made of two electrodes, which are connected to the two ends of a broken cable, housed in a special gas tube that allows excessive voltage induced by lightning to discharge through it. They are helpful, but they do not offer 100% protection.

An important characteristic of lightning is that it is dangerous not only when it directly hits the camera or cable but also when it strikes within close range. The probability of a direct lightning hit is close to zero. The more likely situation is that lightning will strike close by (within a couple of hundred meters of the camera) and induce high voltage in all copper wires in the vicinity. The induction produced by such a discharge is sufficient to cause irreparable damage. Lightning strikes measuring over 10,000,000 V and 1,000,000 A are possible, so one can imagine the induction they can create.

Again, as with the ground loops, the best protection from lightning is using a fiber-optic cable; with no metal connection, no induction is possible.


URL: https://www.sciencedirect.com/science/article/pii/B9780124045576500124

Taxonomy and Framework for Integrating Dependability and Security1

Jiankun Hu, ... Zahir Tari, in Information Assurance, 2008

NFUA.

There are human-made faults that do not belong to FUA. Most such faults are introduced by error, such as configuration problems, incompetence issues, accidents, and so on. Fault detection activity, including penetration testing, is not considered a fault itself, as it does not cause system output to deviate from its desired trajectory. A nonrepudiation fault also belongs to the NFUA category, as it normally involves authorized access.

Nonhuman-made faults (NHMF).

NHMF refers to faults caused by natural phenomena without human participation. These are physical faults caused by a system's internal natural processes (e.g., physical deterioration of cables or circuitry), or by external natural processes. The latter ones originate outside a system but cross system boundaries and affect the hardware either directly, such as radiation, or via user interfaces, such as input noise [7]. Communication faults are an important part of the picture. They can also be caused by natural phenomena. For example, in communication systems, a radio transmission message can be destroyed by an outer space radiation burst, which results in system faults, but has nothing to do with system hardware or software faults. Such faults have not been discussed before in the existing literature.

From the above discussion, we propose the following elementary fault classes, as shown in Figure 6.5. From these elementary fault classes, we can construct a tree representation of various faults, as shown in Figure 6.6.


FIGURE 6.5. Elementary fault classes.


FIGURE 6.6. Tree representation of faults.

Figure 6.7 shows different types of availability faults. The Boolean operation block performs either "Or" or "And" operations, or both, on the inputs. We provide several examples to explain the above structure. Consider the case where the Boolean operation box performs "Or" operations. F1.1 (a malicious attempt fault with intent to damage availability) combined with software faults will cause an availability fault. A typical example is the Zotob virus, which can shut down the Windows operating system. It gains access to the system via a software fault (buffer overflow) in Microsoft's plug-and-play software and attempts to establish permanent access to the system (back door). F1.1 in combination with hardware faults can also cause an availability fault. F7 (natural faults) can cause an availability fault. F1.1 and F8 (networking protocol) can cause a denial of service fault. Figure 6.8 shows the types of integrity faults.


FIGURE 6.7. Detailed structure of S1.


FIGURE 6.8. Detailed structure of S2.

The interpretation of S2 is similar to that of S1. The combination of F1.2 and F2 can alter the function of the software and generate an integrity fault. Combining F1.2 and F4 can generate a person-in-the-middle attack and so on. Figure 6.9 shows types of confidentiality faults.


FIGURE 6.9. Detailed structure of S3.

The interpretation of S3 is very similar to those of S1 and S2. A combination of F1.3 and F2 can generate a spying type of virus that steals users' logins and passwords. It is easy to deduce other combinations.
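As a hedged sketch (not the authors' implementation), the snippet below models the "Or" semantics of the Boolean operation block in Figures 6.7–6.9: a service-level fault is triggered when every elementary fault in any one of its listed combinations is present. The labels follow the text above; treating F2 as the software-fault class is an inference from the S2 and S3 examples.

```python
# Hedged illustration of the "Or" combination of elementary fault classes.
present_faults = {"F1.1", "F2"}  # malicious availability attempt + software fault

combinations = {
    "S1 (availability fault)": [
        {"F1.1", "F2"},   # malicious attempt + software fault (e.g., the Zotob case)
        {"F7"},           # natural faults alone
        {"F1.1", "F8"},   # malicious attempt + networking protocol -> denial of service
    ],
    "S2 (integrity fault)": [
        {"F1.2", "F2"},   # alters the function of the software
        {"F1.2", "F4"},   # person-in-the-middle attack
    ],
    "S3 (confidentiality fault)": [
        {"F1.3", "F2"},   # spying virus stealing logins and passwords
    ],
}

for service_fault, combos in combinations.items():
    if any(combo <= present_faults for combo in combos):  # subset test = all faults present
        print(f"{service_fault} triggered by {sorted(present_faults)}")
```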

Now let us look at the complex case of a Trojan horse. A Trojan horse may remain quiet for a long time, or even forever, and so it will not cause service failure during the quiet period. This is hard to model with conventional frameworks. Within our framework, we need to observe two factors for classification. The first factor is the consequence of introducing the Trojan horse, that is, whether it causes a fault or a combination of faults, such as availability, integrity, and confidentiality faults. If there is no consequence (i.e., no service deviation error) after introducing it, then it is not considered a fault. This conforms to the basic definition of faults. The second factor is whether the intrusion is a malicious attempt. Clearly, a network scan by the system administrator is not considered a fault. When the objective of a Trojan horse is not malicious and it never affects system service, it is not considered a fault in our framework. Such scenarios have not been addressed properly in many other frameworks, where exploit-type activities are characterized as faults even though they may never cause service deviation. If, however, a Trojan horse involves a malicious attempt fault and does cause service deviation, then it is considered a fault, classified by the S1, S2, and S3 components.
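The two-factor test for the Trojan horse case can be condensed into a short sketch; this is only an illustration of the decision logic described above, not a formalism taken from the chapter.

```python
# Hedged sketch of the two-factor classification of a Trojan horse.
def classify_trojan(causes_service_deviation: bool, malicious_attempt: bool) -> str:
    """Apply the two factors discussed in the text."""
    if not causes_service_deviation:
        # No service deviation error -> not a fault, per the basic definition of faults.
        return "not a fault"
    if malicious_attempt:
        # Malicious and causing service deviation -> a fault, classified under S1/S2/S3.
        return "fault (classified by the S1, S2, and S3 components)"
    # Non-malicious but service-affecting cases are not spelled out in the text above.
    return "not covered by the two explicit cases discussed"

print(classify_trojan(causes_service_deviation=False, malicious_attempt=True))  # dormant Trojan
print(classify_trojan(causes_service_deviation=True, malicious_attempt=True))   # active, malicious
```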

Because a service failure is mainly due to faults, we concentrate our discussion on faults and means to attain fault prevention, fault tolerance, fault detection, and fault removal in this chapter.


URL: https://www.sciencedirect.com/science/article/pii/B9780123735669500082

Human Interfaces

Hartmut Haberland, in Human Factors in Information Technology, 1999

CHALLENGING THE TRADITIONAL VIEW

The traditional view gives languages as natural phenomena a privileged status vis-à-vis their artificial extensions. Maybe not even this is historically correct: it has been suggested that ‘artificial’ writing could be at least as old as ‘natural’ speaking, since we cannot know if the earliest hominids used their vocal organs for symbolic expression prior to their producing visible marks on stone and the like. This is, of course, speculative; but it is easier to argue why the traditional view is problematic, viz. that there is a naturally developed core in human language that is barely affected by conscious design. The idea of a natural core of language can only be maintained if we consider language use as external to the language system proper. The key idea that helps us to understand this is that human beings are not just natural, biological creatures that live in an artificial, technical environment which they have created. Neurobiologists point out (e.g. Changeux 1985) that our biological basis is not simply unaffected by the environment shaped by humans; the relationship goes both ways. Hence we cannot simply divide phenomena pertaining to humans into biologically given, natural ones and human-created artificial ones. When we realise that artificial phenomena shape phenomena perceived as natural, the latter’s naturalness can be doubted seriously.

The distinction between the natural and the artificial also disregards the fact that human beings live in a society that they have not created individually, but collectively. The fact that they have not created society (or its manifestations, like language) individually may sometimes lead them to assume that these have not been created at all, or at least not by anything human; hence they must have been there all the time, must be 'natural'. What human beings have created, and know that they have created, must then be of a totally different order, be 'artificial'.

All linguistic manifestations are tools of the human mind, and as such they are partly devised consciously, ‘planned’, partly internalised and spontaneous. Not even Good’s example of face to face conversation is fully spontaneous and ‘natural’. The asymmetry which is so obvious in Human–Machine–Interaction where the Human agent is privileged (since humans can program machines, and machines cannot program humans) is also present in apparently innocent face to face encounters. Only an idealised dominance-free dialogue in the sense of Habermas (which always has been conceived as an ideal model, not an empirically observable fact) could be truly natural and symmetrical; concrete humans that interact are always under constraints and outer pressure which usually puts one of the interlocutors in a position of power.

Even in sociolinguistics, doubt is growing about the feasibility of a distinction between the ‘natural’ and the ‘unnatural’, given that historical sociolinguistic states are always under some conscious control from humans. This does not, however, mean that ‘natural’ sociolinguistic states exist, as if language, if untampered with, could develop fully spontaneously and beyond the control of the societal mind. Naturalness is an analytic construct, not something that will unfold by itself when one leaves one’s language alone.


URL: https://www.sciencedirect.com/science/article/pii/S0923843399800082

RIoT Control

Tyson Macaulay, in RIoT Control, 2017

Fractal Security

Wikipedia defines a fractal as “a natural phenomenon or a mathematical set that exhibits a repeating pattern that displays at every scale.” Fractal security is about repeating security structures at different scales and repeating the same structures at different points in the infrastructure. The main benefits from applying operational security designs that repeat and scale up and down will be:

Strength. Fractal patterns once established are known to create strong physical forms by repeating stable properties uniformly through a physical structure. We make an assumption that the same will hold true logically (in networks and virtualized structures).

Operationally efficient security. Operational tools and techniques can be developed that are uniform but scale according to the system under management.

Repeatability. Fractal security is repeatable and therefore scalable and economical security.

Fractal security would mean that a carrier-level security system should be recognizable at the enterprise, small and medium-sized business (SMB), and consumer/home levels. This will be important in the IoT, where communications will be constant and both north-south in nature (data traveling to and from public networks) and east-west in nature (traffic switched locally to allow devices to communicate with each other).

In the IoT, many devices will be utilizing the same shared infrastructure in the DC, cloud, network, and gateways, while endpoint devices will be unique. Therefore, if security is not consistent (fractal-like) across assets like the DC, cloud, network, and gateways (north-south), and also within those assets (east-west, intra-system communications in the DC or localized switching in a LAN, office branch, or home environment), then threat agents will attack the weakest links. Network segmentation and microsegmentation implemented across different assets like the DC, cloud, network, and home gateways might be a form of fractal security (see Figs. 13.22 and 13.23).


Figure 13.22. Segmentation and fractal-like security.


Figure 13.23. Microsegmentation and fractal-like security.

A fractal-like security system will present a flat attack surface without handholds. The weakness in this model is that a single flaw will affect all fractals. One way to address this issue is to use the recurring geometry but with different elements. For instance, the same reference designs could be applied with different mixtures of vendor products: not so many vendors that operational costs become too high, but enough to avoid a monoculture, for instance two to three.
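As a hedged configuration-level sketch (an assumption, not taken from the book), fractal-like security can be pictured as one control template instantiated at every scale, with only the vendor element varied to avoid a monoculture. The scale names and vendor labels below are hypothetical.

```python
# Hypothetical illustration: the same segmentation template repeated at every scale.
SEGMENT_TEMPLATE = (
    "north-south filtering at the perimeter",
    "east-west microsegmentation between workloads",
    "uniform monitoring and logging",
)

SCALES = {               # scale -> vendor element used at that scale
    "carrier": "vendor A",
    "enterprise": "vendor B",
    "SMB": "vendor A",
    "home gateway": "vendor C",
}

def build_policy(scale: str, vendor: str) -> dict:
    """Same controls at every scale; only the element (vendor) changes."""
    return {"scale": scale, "vendor": vendor, "controls": list(SEGMENT_TEMPLATE)}

for scale, vendor in SCALES.items():
    print(build_policy(scale, vendor))
```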


URL: https://www.sciencedirect.com/science/article/pii/B9780124199712000133

Light and Television

Vlado Damjanovski, in CCTV (Third Edition), 2014

A little bit of history

Light is one of the basic and greatest natural phenomena, vital not only for life on this planet, but also very important for the technical advancement and ingenuity of the human mind in the visual communication areas: photography, cinematography, television, and multimedia. The main source of light for our planet is our closest star – the sun.

Even though it is so “basic” and we see it all the time around us, light is the single biggest stumbling block of science. Physics, from a very simple and straightforward science at the end of the nineteenth century, became very complex and mystical. It forced the scientists in the beginning of the twentieth century to introduce the postulates of quantum physics, the “principles of uncertainty of the atoms,” and much more – all in order to get a theoretical apparatus that would satisfy a lot of practical experiments but, equally, make sense to the human mind.

This book is not written with the intent of going deeper into each of these theories, but rather I will discuss the aspects that affect video signals and CCTV.

The major "problem" scientists face when researching light is that it has a dual nature: it behaves as though it is a wave – through the effects of refraction and reflection – but it also appears to have a particle nature – through the well-known photo-effect discovered by Heinrich Hertz in the nineteenth century and explained by Albert Einstein in 1905. As a result, the latest trends in physics are to accept light as a phenomenon of a "dual nature."

It would be fair at this stage, however, to give credit to at least a few major scientists in the development of physics, and light theorists in particular, without whose work it would have been impossible to attain today’s level of technology.

Isaac Newton was one of the first physicists to explain many natural phenomena, including light. In the seventeenth century he explained light as having a particle nature. This view held until Christiaan Huygens, later in that century, proposed an explanation of light's behavior through the wave theory. Many scientists had deep respect for Newton and did not change their views until the very beginning of the nineteenth century, when Thomas Young demonstrated the interference behavior of light. Augustin Fresnel also performed some very convincing experiments that clearly showed that light has a wave nature.

A very important milestone was the appearance of James Clerk Maxwell on the scientific scene, who in 1873 asserted that light was a form of high-frequency electromagnetic wave. His theory predicted the speed of light as we know it today: 300,000 km/s. With the experiments of Heinrich Hertz, Maxwell’s theory was confirmed. Hertz, however, discovered an effect that is known as the photo-effect, where light can eject electrons from a metal whose surface is exposed to light. However, it was difficult to explain the fact that the energy with which the electrons were ejected was independent of the light intensity, which was in turn contradictory to the wave theory. With the wave theory, the explanation would be that more light should add more energy to the ejected electrons.

This stumbling block was satisfactorily explained by Einstein who used the concept of Max Planck’s theory of quantum energy of photons, which represent minimum packets of energy carried by the light itself. With this theory, light was given its dual nature, that is, some of the features of waves combined with some of the features of particles.
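For reference, the standard textbook form of the relation Einstein used (not a formula quoted from this chapter) is:

```latex
E = h\nu, \qquad E_{k,\max} = h\nu - \phi
```

where E is the energy of a photon of frequency ν, h is Planck's constant, E_k,max is the maximum kinetic energy of an ejected electron, and φ is the work function of the metal. Because the electron energy depends on the light's frequency rather than its intensity, this accounts for the photo-effect observations that the pure wave picture could not explain.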

This theory is so far the best explanation for the majority of light's behavior, and that is why in CCTV we apply this "dual approach" theory to light. So, on one hand, in explaining the concepts of lenses we will mostly be using the wave theory of light. On the other hand, the principles of imaging chip operation (CCD or CMOS), for example, are based on light's particle (material) behavior.

Clearly, in practice, light is a mixture of both approaches, and we should always have in mind that they do not exclude each other.


URL: https://www.sciencedirect.com/science/article/pii/B9780124045576500021

Explore, Explain, Design

Andrew S. Gibbons, C. Victor Bunderson, in Encyclopedia of Social Measurement, 2005

Exploratory Research Leads to Explanatory Research

As the activities of natural history measure and catalog natural phenomena, patterns become evident, requiring explanations of causal relationships, origins, and interdependencies. For example, when paleontological research on both sides of the Atlantic revealed the types of prehistoric animal and plant life that had once inhabited those regions, a pattern of relationship became evident to the scientist Alfred Wegener, and to others, that ran directly contrary to the prevailing explanatory theories of the time regarding the origin and history of the continents. To Wegener, the only explanation that fit all of the observations was that the separate continents had been joined at one point in the past but had drifted apart. Though his explanatory theory of continental drift was dismissed by opinion leaders, additional evidence that supported Wegener's theory appeared many years later when the Atlantic sea floor was being mapped for the first time. Clear signs of sea-floor spreading gave a new relevance to Wegener's theory. What is important here is not that Wegener's theory triumphed, but that it was the description—the sea-floor mapping—of natural phenomena that led to the ultimate reconsideration of Wegener's theory.


URL: https://www.sciencedirect.com/science/article/pii/B0123693985000177

Geostatistics

Saman Maroufpoor, ... Xuefeng Chu, in Handbook of Probabilistic Models, 2020

Abstract

Humans have always sought to obtain enough information about natural phenomena, and the required tools have been created for this purpose. The limitations of these tools are important factors that affect the amount of exact information gathered. To overcome these limitations, efficient mathematical and statistical models (e.g., geostatistical and deterministic methods) have been developed. In geostatistical methods, the spatial structure of the data is studied because data are related to the community under study. In this chapter, the concepts of geostatistics are presented with a focus on the variogram and a variety of Kriging methods.
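As a point of reference (standard geostatistics, not a formula quoted from this chapter), the experimental semivariogram that underpins Kriging is typically estimated as:

```latex
\hat{\gamma}(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \bigl[ z(x_i + h) - z(x_i) \bigr]^2
```

where z(x_i) is the observed value at location x_i and N(h) is the number of data pairs separated by the lag distance h.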


URL: https://www.sciencedirect.com/science/article/pii/B9780128165140000096

COMPUTER SIMULATION OF STRUCTURAL ANALYSIS IN CIVIL ENGINEERING

Jiang Jian-Jing, ... Hua Bin, in Computational Mechanics in Structural Engineering, 1999

INTRODUCTION

Computers are used widely to simulate the objective world, including natural phenomena, systems engineering, kinematic principles, and even the human brain. Though civil engineering is a traditional trade, computer simulation has been applied successfully, especially in structural analysis. Three prerequisites are needed to perform structural analysis: (1) the constitutive law of the specific material, which can be obtained by small-scale tests; (2) an effective numerical method, such as the finite element method (FEM) or direct integration; and (3) graphics display software and a visualization system. Figure 1 shows the philosophy of computer simulation in structural analysis. The following parts give a comprehensive explanation of several aspects.
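As a hedged illustration of the second prerequisite (an effective numerical method), the sketch below assembles and solves a two-element, one-dimensional finite element model of an axially loaded bar. The material properties and load are hypothetical, and the model is far simpler than anything discussed in the paper.

```python
# Minimal 1D finite element sketch: a bar fixed at one end, loaded axially at the other.
import numpy as np

E, A, L = 210e9, 1e-4, 1.0           # Young's modulus (Pa), cross-section (m^2), element length (m)
k = E * A / L                        # axial stiffness of one bar element
n_elem, n_node = 2, 3

# Assemble the global stiffness matrix from identical 2x2 element matrices
K = np.zeros((n_node, n_node))
ke = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += ke

F = np.array([0.0, 0.0, 1000.0])     # 1 kN axial load at the free end (node 2)

# Fix node 0 and solve K u = F on the free degrees of freedom
free = [1, 2]
u = np.zeros(n_node)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("nodal displacements (m):", u)  # tip value matches the analytical F*L_total/(E*A)
```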


Figure 1. Philosophy of computer simulation in structural analysis


URL: https://www.sciencedirect.com/science/article/pii/B9780080430089500582

What is the phenomenon in qualitative research?

A phenomenon (plural, phenomena) is a general result that has been observed reliably in systematic empirical research. In essence, it is an established answer to a research question.
Correlational research: This type of research recognizes trends and patterns in data, but its analysis does not go so far as to establish the causes of those observed patterns. Correlational research is sometimes considered a type of descriptive research, as no variables are manipulated in the study.

What are the 4 types of research design?

There are four main types of quantitative research: descriptive, correlational, causal-comparative/quasi-experimental, and experimental. Causal-comparative/quasi-experimental research attempts to establish cause-effect relationships among the variables; these designs are very similar to true experiments, but with some key differences.

Which data is obtained through a research relating to a phenomenon over a time period?

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents.