Difference between revisions of "Time"
|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Moonshot, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, Prompt Engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools

<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());

gtag('config', 'G-4GCWLBVJ7T');
</script>
}}
[https://www.youtube.com/results?search_query=ai+clock+time+keep+GPS+position+~navigation+timing YouTube]
[https://www.quora.com/search?q=time ... Quora]
[https://www.google.com/search?q=ai+clock+time+keep+GPS+position+~navigationtiming ...Google search]
[https://news.google.com/search?q=ai+clock+time+keep+GPS+position+~navigationtiming ...Google News]
[https://www.bing.com/news/search?q=ai+clock+time+keep+GPS+position+~navigationtiming&qft=interval%3d%228%22 ...Bing News]

* [[Time]] ... [[Time#Positioning, Navigation and Timing (PNT)|PNT]] ... [[Time#Global Positioning System (GPS)|GPS]] ... [[Causation vs. Correlation#Retrocausality|Retrocausality]] ... [[Quantum#Delayed Choice Quantum Eraser|Delayed Choice Quantum Eraser]] ... [[Quantum]]
* [[Government Services]]:
** [[National Institute of Standards and Technology (NIST)]] ... [https://www.nist.gov/pml/time-and-frequency-division Time and Frequency Division, Physical Measurement Laboratory]
** [[U.S. Department of Homeland Security (DHS)]] ... [https://www.dhs.gov/science-and-technology/pnt-program Science and Technology (S&T) Positioning, Navigation, and Timing (PNT) Program]
** [[Defense]] ... [https://www.cnmoc.usff.navy.mil/Our-Commands/United-States-Naval-Observatory/Precise-Time-Department/ Precise Time Department ... the U.S. Naval Observatory has maintained a Time Service Department since 1880]
* [[Perspective]] ... [[Context]] ... [[In-Context Learning (ICL)]] ... [[Transfer Learning]] ... [[Out-of-Distribution (OOD) Generalization]]
* [https://www.npl.co.uk/ntc National Timing Centre] ... Assured Time and Frequency for the UK
* [https://en.wikipedia.org/wiki/Time Time] ... [https://en.wikipedia.org/wiki/Coordinated_Universal_Time Coordinated Universal Time (UTC)] ... [https://en.wikipedia.org/wiki/Clock Clock] ... [https://en.wikipedia.org/wiki/History_of_timekeeping_devices Timekeeping | Wikipedia]
* [https://interestingengineering.com/the-very-long-and-fascinating-history-of-clocks The Very Long and Fascinating History of Clocks | Christopher McFadden - Interesting Engineering]
** [https://www.crownsterling.io/ Crown Sterling] ... changing the face of digital security with its non-integer-based algorithms that leverage time, AI and irrational numbers.
** [https://www.csoonline.com/article/3235970/what-is-quantum-cryptography-it-s-no-silver-bullet-but-could-improve-security.html Quantum cryptography] ... the infosec industry looks to quantum cryptography and quantum key distribution (QKD)
* [https://spectrum.ieee.org/qa-creating-time-crystals-using-quantum-computers What’s a Time Crystal? | Charles Q. Choi - IEEE Spectrum] ... And how do Google researchers use quantum computers to make them? ... a quantum system of many particles that organize themselves into a periodic pattern of motion—periodic in time rather than in space—that persists in perpetuity.
* [https://spectrum.ieee.org/time-reversal-interface This Mirror Reverses How Light Travels in Time | Charles Q. Choi - IEEE Spectrum] ... There are already applications in wireless, radar, and optical computing ... These applications often reverse the order of signals to help process them.

__NOTOC__

= Sequence/Time-based Algorithms =
* [https://www.advancinganalytics.co.uk/blog/2021/06/22/10-incredibly-useful-time-series-forecasting-algorithms 10 Incredibly Useful Time Series Forecasting Algorithms]
* [https://www.tableau.com/data-insights/ai/algorithms Artificial intelligence (AI) algorithms: a complete overview]
* [https://science.nasa.gov/technology/technology-highlights/new-ai-algorithms-streamline-data-processing-for-space-based-instruments New AI Algorithms Streamline Data Processing for Space-based Instruments]
* [https://www.forbes.com/sites/forbestechcouncil/2021/08/11/unlocking-the-power-of-predictive-analytics-with-ai/ Unlocking The Power Of Predictive Analytics With AI - Forbes]
* [https://blog.netsil.com/a-comparison-of-time-series-databases-and-netsils-use-of-druid-db805d471206 A Comparison of Time Series Databases and Netsil’s Use of Druid | Netsil]
* [https://azure.microsoft.com/en-us/blog/microsoft-announces-the-general-availability-of-azure-time-series-insights/ Microsoft announces the general availability of Azure Time Series Insights | Ryan Waite - Microsoft]
* [https://www.outlyer.com/blog/top10-open-source-time-series-databases/ Top 10 Time Series Databases | Outlyer]

Time-based AI algorithms make predictions or analyses from time series data: observations collected over time that have a temporal order, such as daily temperatures, stock prices, or the number of visitors to a website. These algorithms can forecast future values, detect trends and patterns, and support decisions based on historical data, with applications across many fields, including finance, economics, meteorology, and healthcare.
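The basics above can be sketched in a few lines of Python: a hypothetical daily-temperature series (the numbers are invented for illustration), a moving average to expose its trend, and a naive last-value forecast as the simplest possible baseline.

```python
# A minimal sketch of working with time series data, using only the Python
# standard library. The temperature readings below are invented.

def moving_average(series, window):
    """Smooth a time-ordered series to expose its underlying trend."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

def naive_forecast(series, horizon):
    """Simplest forecasting baseline: repeat the last observed value."""
    return [series[-1]] * horizon

# Hypothetical daily temperatures with a slow upward trend.
temps = [15.0, 15.5, 14.8, 16.1, 16.4, 16.0, 17.2, 17.5]
trend = moving_average(temps, window=3)   # 3-day smoothed trend
print(trend)
print(naive_forecast(temps, horizon=2))
```

Any real forecasting method (ARIMA, exponential smoothing, an LSTM) is judged against baselines this simple; if it cannot beat the naive forecast, the extra machinery is not earning its keep.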


<hr><center>

<b><i>Whenever we have developed better clocks, we’ve learned something new about the world.</i></b><br> - Alexander Smith [https://scitechdaily.com/new-time-dilation-phenomenon-revealed-timekeeping-theory-combines-quantum-clocks-and-einsteins-relativity/ New Time Dilation Phenomenon Revealed: Timekeeping Theory Combines Quantum Clocks and Einstein’s Relativity - Dartmouth College]

</center><hr>

== <span id="Common"></span>Common ==
There are different types of sequence/time-based AI algorithms, depending on the goal and the method of the algorithm. Some of the most common ones are:
* Time Series Forecasting:
** [[Forecasting#Time Series Forecasting - Statistical|Statistical]]:
*** <b>Autoregressive (AR)</b>: uses past values of the time series to predict future values. It assumes that the current value is a linear function of previous values. For example, AR can be used to forecast the weather based on historical data.
*** <b>Autoregressive Integrated Moving Average (ARIMA)</b>: is an extension of AR that also accounts for the trend and the seasonality of the time series. It uses differencing to make the time series stationary (i.e., having constant mean and variance) and then applies AR and moving average (MA) models. For example, ARIMA can be used to forecast the sales of a product based on past sales and seasonal patterns.
*** <b>Seasonal Autoregressive Integrated Moving Average (SARIMA)</b>: is a further extension of ARIMA that also accounts for the cyclic variations of the time series. It uses seasonal differencing and seasonal AR and MA models to capture the periodic fluctuations of the time series. For example, SARIMA can be used to forecast the electricity demand based on past demand and seasonal factors.
*** <b>Exponential Smoothing (ES)</b>: uses weighted averages of past values of the time series to predict future values. It gives more weight to recent values than older values, and it can also incorporate trend and seasonality components. For example, ES can be used to forecast the inventory level based on past demand and supply.
** [[Forecasting#Time Series Forecasting - Deep Learning|Deep Learning]]:
*** <b>Prophet</b>: is a modern and flexible approach to time series forecasting developed by [[Meta|Facebook]]. It uses a decomposable model that consists of trend, seasonality, and holiday components, and it allows for adding custom effects and prior information. For example, Prophet can be used to forecast the web traffic for a data science blog website based on past traffic and special events.
*** <b>[[Neural Turing Machine]] (NTM)</b>: combines the fuzzy pattern matching capabilities of [[Neural Network]]s with the algorithmic power of programmable computers. NTMs are an instance of [[Memory]] Augmented [[Neural Network]]s, a class of [[Recurrent Neural Network (RNN)]]s which decouple computation from [[memory]] by introducing an external [[memory]] unit. NTMs have demonstrated superior performance over Long Short-Term [[Memory]] Cells in several sequence learning tasks.
* [[Neural Network]]s:
** <b>[[Recurrent Neural Network (RNN)]]</b>: is a type of [[Deep Learning]] model that can process sequential data such as time series. It uses a network of neurons that have feedback loops, which enable them to store information from previous inputs. For example, RNN can be used to forecast the prices of Bitcoin based on past prices and other factors.
*** <b>[[Gated Recurrent Unit (GRU)]]</b>: is a gating mechanism in [[Recurrent Neural Network (RNN)]] architecture. Like other RNNs, a GRU can process sequential data such as time series, natural language, and speech. The GRU is similar to a [[Long Short-Term Memory (LSTM)]] with a forget gate, but has fewer parameters than LSTM, as it lacks an output gate. This means that GRUs are generally easier and faster to train than their LSTM counterparts. GRUs have been found to perform similarly to LSTMs on certain tasks such as polyphonic music modeling, speech signal modeling, and natural language processing, showing that gating is indeed helpful in general.
*** <b>[[Long Short-Term Memory (LSTM)]]</b>: is a special type of RNN that can handle long-term dependencies in sequential data. It uses a [[memory]] cell that can store, update, and forget information over time, and it has gates that control the flow of information in and out of the cell. For example, LSTM can be used to forecast the generation of wind power based on past generation and weather conditions:
**** <b>[[Bidirectional Long Short-Term Memory (BI-LSTM)]]</b>: is a type of [[Recurrent Neural Network (RNN)]] architecture that processes data in both forward and backward directions. It consists of two LSTMs: one taking the input in a forward direction, and the other in a backward direction. BI-LSTMs effectively increase the amount of information available to the network, improving the context available to the algorithm; for example, knowing what words immediately follow and precede a word in a sentence. Compared to LSTM, BI-LSTM combines the forward hidden layer and the backward hidden layer, which can access both the preceding and succeeding contexts. This flow of data in both directions makes the BI-LSTM different from other LSTMs. BI-LSTMs have been successfully applied to various tasks such as natural language processing, speech recognition, and traffic forecasting.
**** <b>Bidirectional Long Short-Term Memory (BI-LSTM) with Attention Mechanism</b>: is a type of [[Recurrent Neural Network (RNN)]] architecture that processes data in both forward and backward directions, and uses an attention mechanism to weigh the importance of different parts of the input sequence. The attention mechanism allows the network to focus on specific parts of the input sequence when making predictions, rather than treating all parts of the sequence equally. This can be particularly useful when dealing with long input sequences, where some parts of the sequence may be more relevant to the prediction than others. BI-LSTMs with Attention Mechanism have been successfully applied to various tasks such as text classification, [[Sentiment Analysis]], and human activity recognition.
**** <b>[[Average-Stochastic Gradient Descent (SGD) Weight-Dropped LSTM (AWD-LSTM)]]</b>: is a variant of LSTM that employs DropConnect for regularization, as well as NT-ASGD for optimization. NT-ASGD stands for non-monotonically triggered averaged stochastic gradient descent, which returns an average of the last iterations of weights. AWD-LSTM has shown great results on both word-level and character-level language models.
*** <b>[[Sequence to Sequence (Seq2Seq)]]</b>: can map a variable-length input sequence to a variable-length output sequence. It is often used for natural language processing tasks, such as machine translation, text summarization, conversational models, and question answering. The Seq2Seq algorithm consists of two main components: an encoder and a decoder. The encoder reads the input sequence one timestep at a time and produces a hidden vector representation of the input. The decoder then uses the hidden vector as the initial state and generates the output sequence one timestep at a time, using the previous output as the input context.
** <b>[[Transformer]]</b>: is a state-of-the-art [[Deep Learning]] model that can process sequential data such as time series. It uses layers of attention mechanisms that can learn how to focus on relevant parts of the input data, and it can handle long-term dependencies and parallel computations efficiently. For example, [[Transformer]] can be used to forecast the spread of COVID-19 based on past cases and interventions. [[Transformer]] can process sequential data using layers of attention mechanisms, without using recurrent or convolutional layers. It can handle long-term dependencies and parallel computations efficiently, and it can achieve better results than RNN-based Seq2Seq models on various tasks.
*** <b>[[Generative Pre-trained Transformer (GPT)]]</b>: are a family of language models that use [[Deep Learning]] techniques to generate natural language text. They are based on the [[transformer]] architecture and can be fine-tuned for various natural language processing tasks such as text generation, language translation, and text classification. The first GPT was introduced in 2018 by the American artificial intelligence (AI) company [[OpenAI]]. GPT models are artificial [[Neural Network]]s that are based on the [[transformer]] architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content.
*** <b>[[Attention]] Mechanism</b>: allows the decoder to selectively focus on different parts of the input sequence when generating the output, instead of relying on a single fixed vector. This can improve the performance and accuracy of the Seq2Seq model, especially for long sequences.
**** <b>[[Transformer-XL]]</b>: is a transformer-based language model that introduces the notion of recurrence to the deep self-attention network. It was designed to enable learning dependency beyond a fixed length without disrupting temporal coherence. The model consists of a segment-level recurrence mechanism and a novel positional encoding scheme. This method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla [[Transformer]]s, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla [[Transformer]]s during evaluation.
*** <b>Beam search</b>: is a technique to find the most probable output sequence given the input sequence, by keeping track of multiple candidate sequences and expanding them based on their probabilities. This can improve the quality and diversity of the output, compared to using a greedy or random search.
** <b>Convolutional Neural Network (CNN)</b>: is another type of [[Deep Learning]] model that can process sequential data such as time series. It uses layers of filters that can extract features from local regions of the input data, and it can capture complex patterns and relationships in the data. For example, CNN can be used to forecast an avalanche in a famous ski resort based on past snowfall and temperature data.
** <b>[[Spatial-Temporal Dynamic Network (STDN)]]</b>: a [[Deep Learning]] framework proposed to address the challenge of modeling complex spatial dependencies and temporal dynamics in traffic prediction. A flow gating mechanism is introduced to learn the dynamic similarity between locations, and a periodically shifted attention mechanism is designed to handle long-term periodic temporal shifting. This approach has been shown to be effective in predicting taxi demand.
* Other:
** <b>Gaussian Process (GP)</b>: is a type of probabilistic model that can handle uncertainty and noise in time series data. It uses a function that defines how similar any two points in the input space are, and it produces a distribution over possible outputs for any given input. For example, GP can be used to forecast the depletion level of stocks in stores based on past sales and inventory data.
** <b>[[End-to-End Speech]]</b>: translation is an approach to speech translation that has gained high interest from the research world in the last few years. It consists of using a single [[Deep Learning]] model that learns to generate translated text of the input audio in an end-to-end fashion. This approach, known as “end-to-end” or “direct” ST, offers many advantages over the traditional cascaded approach, such as avoiding the concatenation of errors, the direct use of prosody from speech, and a lower inference time.
** <b>[[(Tree) Recursive Neural (Tensor) Network (RNTN)]]</b>: type of [[Neural Network]] that is mostly used for natural language processing. It has a tree structure with a neural net at each node. The purpose of these nets is to analyze data that have a hierarchy of structure. An RNTN is a powerful tool for deciphering and labeling patterns. Structurally, an RNTN is a binary tree with three nodes: a root and two leaves. The root and leaf nodes are not neurons, but instead, they are groups of neurons; the more complicated the input data, the more neurons are required. RNTNs have been successfully applied to [[Sentiment Analysis]], where the input is a sentence in its parse tree structure, and the output is the classification for the input sentence, i.e., whether the meaning is very negative, negative, neutral, positive, or very positive.
** <b>[[Temporal Difference (TD) Learning]]</b>: refers to a class of model-free [[Reinforcement Learning (RL)]] methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods. While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known.
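The Exponential Smoothing item in the list above can be sketched in a few lines: the forecast is a weighted average that trusts recent observations more. The data and the smoothing weight <code>alpha</code> below are illustrative assumptions, not from any real inventory system.

```python
# A minimal sketch of simple exponential smoothing in pure Python.
# The demand numbers and alpha=0.5 are invented for illustration.

def simple_exponential_smoothing(series, alpha):
    """Return the smoothed level after each observation.

    level_t = alpha * y_t + (1 - alpha) * level_{t-1}
    so recent values get exponentially more weight than older ones.
    """
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels

demand = [100, 105, 102, 110, 108, 115]   # hypothetical weekly demand
levels = simple_exponential_smoothing(demand, alpha=0.5)
forecast = levels[-1]                     # one-step-ahead forecast
print(forecast)
```

With <code>alpha</code> close to 1 the method reduces to the naive last-value forecast; close to 0 it approaches a long-run average. Trend and seasonality extensions (Holt, Holt-Winters) add further smoothed components on top of this recursion.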
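The beam search item above is easy to make concrete. A toy sketch, assuming a made-up table of per-step token probabilities in place of a trained decoder: keep only the <code>beam_width</code> highest-scoring partial sequences at each step.

```python
# A toy sketch of beam search. The "model" is a hypothetical table of
# next-token probabilities per timestep, not a trained network.
import math

def beam_search(step_logprobs, beam_width):
    """step_logprobs: one dict of token -> log-probability per timestep."""
    beams = [((), 0.0)]  # (partial sequence, cumulative log-probability)
    for logprobs in step_logprobs:
        # Expand every beam with every candidate token.
        candidates = [(seq + (tok,), score + lp)
                      for seq, score in beams
                      for tok, lp in logprobs.items()]
        # Keep only the beam_width highest-scoring partial sequences.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

steps = [
    {"the": math.log(0.6), "a": math.log(0.4)},
    {"cat": math.log(0.7), "dog": math.log(0.3)},
]
best_seq, best_score = beam_search(steps, beam_width=2)
print(best_seq)
```

Greedy search is the special case <code>beam_width=1</code>; wider beams trade computation for a better chance of finding the globally most probable sequence.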
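The Temporal Difference item above bootstraps from the current value estimate rather than waiting for the final outcome. A minimal TD(0) sketch on an invented two-state chain (A leads to B, B leads to the end with reward 1):

```python
# TD(0) on a tiny hypothetical Markov chain: A -> B (reward 0) -> end (reward 1).
# After each step, the value estimate moves toward the bootstrapped target
# r + gamma * V(next_state), before the episode's final outcome is known.

def td0(episodes, alpha=0.1, gamma=0.9):
    V = {"A": 0.0, "B": 0.0, "end": 0.0}
    for _ in range(episodes):
        state = "A"
        while state != "end":
            # Deterministic toy dynamics, invented for illustration.
            next_state, reward = ("B", 0.0) if state == "A" else ("end", 1.0)
            V[state] += alpha * (reward + gamma * V[next_state] - V[state])
            state = next_state
    return V

values = td0(episodes=500)
print(values["A"], values["B"])
```

The estimates converge to V(B) = 1 and V(A) = gamma * V(B) = 0.9, matching the discounted return; a Monte Carlo method would reach the same values but only updates after each episode ends.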
<hr>
<hr>
= What Time Is It? =
* [https://www.darpa.mil/news-events/2019-08-20 DARPA Making Progress on Miniaturized Atomic Clocks for Future PNT Applications | ][[Defense#US Defense Advanced Research Projects Agency (DARPA)|US Defense Advanced Research Projects Agency (DARPA)]]
<img src="https://www.darpa.mil/DDM_Gallery/nist-csac-619-316.jpg" width="1000">
{| class="wikitable" style="width: 550px;"
||
<youtube>hzLTgtFaPLY</youtube>
<b>Atomic Clocks Are Reinventing Time
</b><br>Though humans don't experience it in their daily lives, gravity and movement can change how time elapses. Ultra-precise atomic clocks are now able to measure these tiny changes, known as time dilation. It's a technological advance that could revolutionize our understanding of time.
|}
|<!-- M -->
|}
|}<!-- B -->


<hr>

<b><i>
The Earth's rotation is so regular that it varies by only milliseconds ... do you feel the Earth's rotation slowing down?
</i></b>

<hr>


<br>
== <span id="Light Clock 1905 - Einstein's Thought Experiment"></span>Light Clock 1905 - Einstein's Thought Experiment ==

Imagine you have a special clock that works with light. This clock has two mirrors facing each other, and a beam of light bounces up and down between them. Every time the light goes from the bottom mirror to the top and back down, it counts as one tick of the clock. Einstein's light clock thought experiment shows that when things move fast, time slows down for them. This surprising idea helps us understand the nature of time and motion in our universe. Now, let's think about this clock in two different situations.


<b>Situation 1: Standing Still: </b>First, picture the clock sitting on a table, not moving at all. The light goes straight up to the top mirror and straight back down to the bottom mirror. If you measured the time it takes for the light to do this, you would see it takes a certain amount of time for one tick.

<b>Situation 2: Moving Clock: </b>Now, imagine you place the clock on a skateboard and push it so it's moving. As the clock moves, the light beam has to travel a different path. Instead of going straight up and down, it now has to go in a diagonal path because the mirrors are moving while the light is traveling. It's like when you throw a ball to a friend while running; the ball has to cover more distance because both of you are moving.


<i>What This Means</i> ... Because the light in the moving clock has to travel a longer, diagonal path, it takes more time for one tick to happen compared to when the clock is standing still. This means that for someone watching the moving clock, time appears to run slower for the moving clock compared to a clock that's not moving. This idea is called time dilation. It means that time actually passes at different rates depending on how fast something is moving. If you were riding on the skateboard with the clock, you wouldn't notice anything different about the clock's ticks. But someone standing still and watching you would see that your clock ticks more slowly.


<i>Why It Matters</i> ... This thought experiment helps us understand that time isn't the same everywhere and can be different depending on how fast things are moving. This concept is a key part of Einstein's theory of special relativity, which helps scientists understand how the universe works, especially when things are moving very fast, like spaceships or particles in a collider.
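The light-clock geometry above can be turned into numbers. The diagonal path is longer by the Lorentz factor gamma = 1 / sqrt(1 - v²/c²), so one tick of the moving clock takes gamma times longer. The speeds used below are purely illustrative:

```python
# Time dilation from the light-clock geometry: one tick of the moving clock
# takes gamma = 1 / sqrt(1 - v^2/c^2) times longer than a tick at rest.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v, c=C):
    """How much longer a tick of the moving light clock takes."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(lorentz_factor(0.8 * C))  # a (very fast) skateboard at 80% light speed
print(lorentz_factor(30.0))     # an everyday speed: indistinguishable from 1
```

At 80% of light speed each tick takes 1.67 times longer, while at everyday speeds the factor differs from 1 by far less than a part per trillion, which is why we never notice time dilation directly and need atomic clocks to measure it.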

<youtube>b2Vd9HGB5XQ</youtube>
= <span id="Precision Time Protocol (PTP)"></span>Precision Time Protocol (PTP) =
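PTP's core arithmetic is small enough to show directly. In the standard delay request-response exchange, the master timestamps a Sync message (t1), the slave records its arrival (t2) and the departure of its Delay_Req (t3), and the master records that message's arrival (t4). Assuming a symmetric network path, offset and delay follow from those four timestamps; the values below are invented for illustration.

```python
# The PTP delay request-response calculation, assuming a symmetric path:
#   offset = ((t2 - t1) - (t4 - t3)) / 2   (slave clock minus master clock)
#   delay  = ((t2 - t1) + (t4 - t3)) / 2   (one-way path delay)

def ptp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Invented scenario (all times in microseconds): the slave clock runs
# 5 us ahead of the master, and the one-way path delay is 2 us.
t1 = 1000.0
t2 = t1 + 2.0 + 5.0   # Sync arrival per the slave's (fast) clock
t3 = t2 + 10.0        # slave sends Delay_Req a little later
t4 = (t3 - 5.0) + 2.0 # Delay_Req arrival per the master's clock
offset, delay = ptp_offset_delay(t1, t2, t3, t4)
print(offset, delay)
```

The calculation recovers exactly the 5 us offset and 2 us delay that were built into the scenario; asymmetric paths violate the assumption and appear as a hidden offset error.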
[https://www.google.com/search?q=Navigation+positioning+Aid+radar+waves+artificial+intelligence+ai ...Google search]
* [[Time]] ... [[Time#Positioning, Navigation and Timing (PNT)|PNT]] ... [[Time#Global Positioning System (GPS)|GPS]] ... [[Causation vs. Correlation#Retrocausality|Retrocausality]] ... [[Quantum#Delayed Choice Quantum Eraser|Delayed Choice Quantum Eraser]] ... [[Quantum]]
* [[Case Studies]]
** [[Smart Cities]]
[https://www.google.com/search?q=GPS+Global+Positioning+GNSS+clock+time+keeping+artificial+intelligence+ai ...Google search]
* [[Time]] ... [[Time#Positioning, Navigation and Timing (PNT)|PNT]] ... [[Time#Global Positioning System (GPS)|GPS]] ... [[Causation vs. Correlation#Retrocausality|Retrocausality]] ... [[Quantum#Delayed Choice Quantum Eraser|Delayed Choice Quantum Eraser]] ... [[Quantum]]
* [[Astronomy]]
* GPS has been copied by [[Government Services#Russia|Russia's]] [https://en.wikipedia.org/wiki/GLONASS GLONASS], Europe’s [https://en.wikipedia.org/wiki/Galileo_(satellite_navigation) Galileo], [[Government Services#China|China's]] [https://en.wikipedia.org/wiki/BeiDou BeiDou], India’s IRNSS, and Japan’s [https://en.wikipedia.org/wiki/Quasi-Zenith_Satellite_System QZSS]
* [https://ieeexplore.ieee.org/document/5608862 Artificial intelligence in GPS navigation systems | Jeffrey L. Duffany]
<youtube>aDWbpRXblMk</youtube>
<b>Satellite Navigation Systems Overview with John Pottle
</b><br>Royal Institute of Navigation. John Pottle, Director of the Royal Institute of Navigation, will put into [[context]] what the hundreds of navigation satellites in space are all for and how they work together. This webinar will explain the similarities and differences between global and regional satellite navigation systems, how they are co-ordinated, and by whom. The space-based augmentation systems will also be covered: what are these and how do they help? During the webinar Q&A there was a question about whether or not GNSS could be used for moon missions. Website: https://rin.org.uk/
|}
|}<!-- B -->
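The satellite navigation systems described above all rest on the same idea: a receiver infers its position from the travel times of signals sent by transmitters at known positions. A minimal 2D sketch of that idea follows; it is only illustrative, since real GNSS receivers solve in 3D with at least four satellites because the receiver's own clock bias is an extra unknown, and the anchor coordinates here are invented.

```python
# A 2D, perfectly-synchronized sketch of positioning from signal travel times.
# Subtracting the first anchor's circle equation from the other two turns the
# nonlinear problem into a 2x2 linear system.
import math

C = 299_792_458.0  # signal speed (speed of light), m/s

def trilaterate_2d(anchors, travel_times):
    """Solve for (x, y) from three known anchors and measured travel times."""
    d = [C * t for t in travel_times]  # travel time -> distance
    (x1, y1), (x2, y2), (x3, y3) = anchors
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d[0] ** 2 - d[1] ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
    b2 = d[0] ** 2 - d[2] ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det)

# Invented scenario: receiver at (3000 m, 4000 m), three ground anchors.
anchors = [(0.0, 0.0), (10000.0, 0.0), (0.0, 10000.0)]
receiver = (3000.0, 4000.0)
times = [math.hypot(receiver[0] - ax, receiver[1] - ay) / C for ax, ay in anchors]
x, y = trilaterate_2d(anchors, times)
print(x, y)
```

Because distance is the signal speed times travel time, a nanosecond of clock error corresponds to about 30 cm of position error, which is why GNSS is as much a precise-timekeeping system as a navigation one.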
{| class="wikitable" style="width: 550px;"
||
<youtube>0k2QdX6yZiw</youtube>
<b>Brian Cox Just Announced Mind-Bending Theory Of Time
</b><br>Everything in our universe seems perfect. There are laws governing the entire universe, but certain mysteries have remained unsolved despite decades of research. Why does time travel in one direction? What is the nature of reality? Why does gravity exist? Why does time slow down when we travel at the speed of light? These are questions that have fascinated us for millennia.
|}
|}<!-- B -->
{| class="wikitable" style="width: 550px;"
||
<youtube>8eOEhphQz6k</youtube>
<b>Using AI to get city and weather from GPS
</b><br>A quick demo of how Noodl AI and the Function Co-pilot node can be used to call different APIs from simple text prompts to gather location and weather data from location coordinates.
|}
|}<!-- B -->
<youtube>JKY03NV3C2s</youtube>
<b>PULP-DroNet -- Autonomous Artificial Intelligence-powered Nano-Drone
</b><br>PULP-DroNet is a [[Deep Learning]]-powered visual navigation engine that enables autonomous navigation of a pocket-size quadrotor in a previously unseen environment.
Thanks to PULP-DroNet the nano-drone can explore the environment, avoiding collisions even with dynamic obstacles, in complete autonomy -- no human operator, no ad-hoc external signals, and no remote laptop! This means that all the complex computations are done directly aboard the vehicle, and very fast. The visual navigation engine is composed of both a software and a hardware part. The former is based on the previous DroNet [1] project developed by the RPG [2] at the University of Zürich (UZH). DroNet is a shallow convolutional neural network (CNN) which has been used to control a standard-size quadrotor in a set of environments via remote computation.
The hardware soul of PULP-DroNet is embodied by the PULP-Shield, an ultra-low power visual navigation module featuring a Parallel Ultra-Low-Power (PULP) GAP8 System-on-Chip (SoC) from GreenWaves Technologies [3], an ultra-low power camera, and off-chip Flash/DRAM [[memory]]; the shield is designed as a pluggable PCB for the Crazyflie 2.0 [4] nano-drone. We then developed a general methodology for deploying state-of-the-art [[Deep Learning]] algorithms on top of ultra-low power embedded computation nodes, such as a miniaturized drone. Our novel methodology allowed us first to deploy DroNet on the PULP-Shield, and then to demonstrate how it enables executing the CNN on board the Crazyflie 2.0 within only 64-284mW and with a throughput of 6-18 frames per second! Finally, we field-prove our methodology by presenting a closed-loop, fully working demonstration of vision-driven autonomous navigation relying only on onboard resources, and within an ultra-low power budget. We release here, as open source, all our code, hardware designs, datasets, and trained networks. Reference: D. Palossi, F. Conti, and L. Benini, "An Open Source and Open Hardware [[Deep Learning]]-powered Visual Navigation Engine for Autonomous Nano-UAVs." Preprint: https://arxiv.org/abs/1905.04166 PULP-Platform Project Webpage: https://www.pulp-platform.org/
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
|}<!-- B -->
| + | |||
| + | == <span id="Spacetime">Spacetime</span> == | ||
| + | [https://www.youtube.com/results?search_query=Spacetime YouTube search...] | ||
| + | [https://www.google.com/search?q=Spacetime ...Google search] | ||
| + | * [[Life~Meaning]] ... [[Consciousness]] ... [[Loop#Feedback Loop - Creating Consciousness|Creating Consciousness]] ... [[Quantum#Quantum Biology|Quantum Biology]] ... [[Orch-OR]] ... [[TAME]] ... [[Protein Folding & Discovery|Proteins]] | ||
| + | |||
| + | '''Spacetime''' is a [[mathematical model]] that fuses the three dimensions of [[space]] (length, width, height) and the single dimension of [[time]] into a single four-dimensional continuum. | ||
| + | |||
| + | Before the 20th century, scientists viewed space and time as completely separate entities. Space was a static stage where events occurred, and time was a constant, universal clock. [[Albert Einstein]] and mathematician [[Hermann Minkowski]] revolutionized this view by proving they are inextricably linked. | ||
| + | |||
| + | === Spacetime Core Concept: The Fabric of the Universe === | ||
| + | In physics, every occurrence is an "event" located at a specific point in spacetime, described by four coordinates: | ||
| + | :<math>(x, y, z, t)</math> | ||
| + | |||
| + | Imagine meeting a friend. To ensure you meet, you must provide a specific location (spatial coordinates) and a specific time (temporal coordinate). If you omit the time, the meeting cannot happen. | ||
| + | |||
| + | === General Relativity: Gravity as Curvature === | ||
| + | The most famous application of spacetime is in Einstein’s [[General theory of relativity|General Theory of Relativity]]. | ||
| + | |||
| + | * '''Newton's View:''' Gravity is an invisible force that pulls objects together. | ||
| + | * '''Einstein's View:''' Gravity is not a force; it is the '''curvature of spacetime'''. | ||
| + | |||
| + | Massive objects (like the [[Earth]] or [[Sun]]) warp the fabric of spacetime around them. Smaller objects (like the Moon or a satellite) do not "feel" a force pulling them; they are simply following the straightest possible path (a [[geodesic]]) along this curved surface. | ||
| + | |||
| + | === Visualizing Spacetime: The Light Cone === | ||
| + | Physicists often use '''Minkowski diagrams''' to visualize spacetime. In these graphs, time is usually plotted on the vertical axis and space on the horizontal axis. | ||
| + | |||
| + | * '''World Line:''' A line representing an object's path through time. Even if an object is stationary in space, it moves through time, creating a straight vertical world line. | ||
| + | * '''Light Cone:''' Since nothing can travel faster than [[Speed of light|light]], light spreading out from a single event forms a "cone" shape in the diagram. Events inside this cone can affect one another (causality); events outside are effectively disconnected. | ||
| + | |||
| + | === Key Implications === | ||
| + | * '''[[Time dilation]]:''' Because space and time are linked, moving through space affects movement through time. The faster an object travels through space, the slower it moves through time relative to a stationary observer. | ||
| + | * '''No Universal "Now":''' There is no single clock for the universe. Two observers moving at different speeds or in different gravitational fields will disagree on when an event happened ([[Relativity of simultaneity]]). | ||
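The light-cone idea can be made concrete in a few lines of code: the sign of the Minkowski interval between two events determines whether they can be causally connected. A minimal sketch (function names and the sample events are my own, not from the source):

```python
# Classify the causal relationship between two events using the
# Minkowski interval s^2 = -(c*dt)^2 + dx^2 + dy^2 + dz^2.
# s^2 < 0: timelike  (inside the light cone, can be causally connected)
# s^2 = 0: lightlike (on the cone, connectable only by a light signal)
# s^2 > 0: spacelike (outside the cone, causally disconnected)

C = 299_792_458.0  # speed of light, m/s

def interval_squared(event_a, event_b):
    """Events are (t, x, y, z) tuples in seconds and metres."""
    dt = event_b[0] - event_a[0]
    dx, dy, dz = (event_b[i] - event_a[i] for i in (1, 2, 3))
    return -(C * dt) ** 2 + dx ** 2 + dy ** 2 + dz ** 2

def classify(event_a, event_b):
    s2 = interval_squared(event_a, event_b)
    if s2 < 0:
        return "timelike"
    if s2 > 0:
        return "spacelike"
    return "lightlike"

origin = (0.0, 0.0, 0.0, 0.0)
print(classify(origin, (1.0, 1000.0, 0.0, 0.0)))   # light covers 1000 m in far less than 1 s: timelike
print(classify(origin, (1e-9, 1000.0, 0.0, 0.0)))  # 1000 m in a nanosecond: spacelike
```

Events classified "spacelike" lie outside each other's light cones, matching the bullet above: no signal can link them, so neither can affect the other.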
| + | |||
| + | === Comparison Table === | ||
| + | {| class="wikitable" | ||
| + | ! Concept | ||
| + | ! Classical View (Newton) | ||
| + | ! Spacetime View (Einstein) | ||
| + | |- | ||
| + | | '''Space''' | ||
| + | | A static, rigid stage. | ||
| + | | A flexible fabric that can bend and twist. | ||
| + | |- | ||
| + | | '''Time''' | ||
| + | | Universal and constant for everyone. | ||
| + | | Relative; flows at different rates for different observers. | ||
| + | |- | ||
| + | | '''Gravity''' | ||
| + | | A force acting at a distance. | ||
| + | | The curvature of the geometry of spacetime. | ||
| + | |} | ||
| + | |||
| + | |||
| + | {|<!-- T --> | ||
| + | | valign="top" | | ||
| + | {| class="wikitable" style="width: 550px;" | ||
| + | || | ||
| + | <youtube>YpyXVkqkQgg</youtube> | ||
| + | <b>Time Does Not Exist. Let me explain with a graph. | ||
| + | </b><br>How do we really move through spacetime? Sadly the books have sold out. | ||
| + | |} | ||
| + | |<!-- M --> | ||
| + | | valign="top" | | ||
| + | {| class="wikitable" style="width: 550px;" | ||
| + | || | ||
| + | <youtube>3khY_bwf5FY</youtube> | ||
| + | <b>What Exactly is Spacetime? Explained in Ridiculously Simple Words | ||
| + | </b><br>Spacetime, as a concept, is related to a space that consists of 4 dimensions instead of the regular 3-dimensional space. As early as 1905, Einstein proposed a now widely popular theory that the speed of light is independent of the motion of all observers, and that space and time are interconnected in a single continuum. This theory, which is now a cornerstone of modern and quantum physics, is known as Einstein’s special theory of relativity. Einstein's proposed idea of a single continuum where space and time are interwoven is what people call “space-time”. | ||
| + | According to this theory, time—which has traditionally been considered an independent entity according to the principles of classical physics—is affected when a body moves through space. This happens because, according to the theory, time and space are connected and part of a single continuum—spacetime. | ||
| + | In this video, we discuss spacetime in absolutely simple words: what exactly is spacetime and how is it related to the force of gravitation and Einstein’s theory of relativity? | ||
| + | |} | ||
| + | |}<!-- B --> | ||
| + | {|<!-- T --> | ||
| + | | valign="top" | | ||
| + | {| class="wikitable" style="width: 550px;" | ||
| + | || | ||
| + | <youtube>GZcXNBYcfe4</youtube> | ||
| + | <b>This Is Why Time Might Not Actually Exist | ||
| + | </b><br>Quantum Entanglement May Reveal a Reality We Can't Handle | ||
| + | |} | ||
| + | |<!-- M --> | ||
| + | | valign="top" | | ||
| + | {| class="wikitable" style="width: 550px;" | ||
| + | || | ||
| + | <youtube>yPVQtvbiS4Y</youtube> | ||
| + | <b>What Actually Are Space And Time? | ||
| + | </b><br>If you like this video, check out writer Geraint Lewis´ excellent book, co-written with Chris Ferrie: | ||
| + | [https://www.amazon.com/Where-Universe-Other-Cosmic-Questions/dp/1728238811 Where Did the Universe Come From? And Other Cosmic Questions: Our Universe, from the Quantum to the Cosmos] AND check out his [https://www.youtube.com/c/AlasLewisAndBarnes Youtube channel] | ||
| + | |} | ||
| + | |}<!-- B --> | ||
== <span id="Longitude"></span>Longitude ==
[https://www.google.com/search?q=Animal+Navigation+Incredible+Bees+Journeys+intelligence ...Google search]
* [[Life~Meaning]] ... [[Consciousness]] ... [[Loop#Feedback Loop - Creating Consciousness|Creating Consciousness]] ... [[Quantum#Quantum Biology|Quantum Biology]] ... [[Orch-OR]] ... [[TAME]] ... [[Protein Folding & Discovery|Proteins]]
* [[Collective Animal Intelligence]] ... [[Animal Ecology]] ... [[Animal Language]] ... [[Bird Identification]]
* [https://www.amazon.com/Incredible-Journeys-Exploring-Wonders-Navigation/dp/1473656826 Incredible Journeys: Exploring the Wonders of Animal Navigation Hardcover | David Barrie]
|}
|}<!-- B -->
| + | |||
| + | == <span id="Molecular Clock"></span>Molecular Clock == | ||
| + | Scientists use the ''molecular clock'', which assumes steady genetic changes, to estimate species divergence times, but recent models like the ''Covariant Evolutionary Tempo (CET)'' by Budd & Mann suggest evolution isn't always steady, predicting rapid bursts in major groups (like mammals or birds) early on, explaining mismatches with fossil records by showing faster initial evolution and diversification, thus refining our understanding of how large animal groups rapidly emerge. | ||
| + | |||
| + | ''' How the Molecular Clock Works ''' | ||
| + | * '''Rate of Mutation:''' The core idea is that mutations in DNA accumulate at a relatively constant rate over time. | ||
| + | * '''Genetic Differences:''' By comparing DNA or protein sequences between species, scientists count the genetic differences. | ||
| + | * '''Dating Divergence:''' More differences imply a longer time since the species shared a common ancestor, allowing estimation of evolutionary timelines. | ||
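Under the steady-rate assumption, the dating arithmetic in these steps is simple: divergence time ≈ (fraction of differing sites) / (2 × substitution rate), with the factor of 2 because both lineages accumulate changes independently after splitting. A toy sketch (the sequences and the rate value are invented for illustration, not measured data):

```python
# Toy molecular-clock dating under the steady-rate assumption.
# time = p_distance / (2 * rate); both lineages mutate, hence the 2.

def pairwise_difference(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned sites that differ (p-distance)."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(1 for x, y in zip(seq_a, seq_b) if x != y)
    return diffs / len(seq_a)

def divergence_time(seq_a: str, seq_b: str, rate_per_site_per_myr: float) -> float:
    """Estimated time (millions of years) since the common ancestor."""
    return pairwise_difference(seq_a, seq_b) / (2.0 * rate_per_site_per_myr)

# Two aligned toy sequences differing at 2 of 20 sites (p-distance 0.10):
a = "ACGTACGTACGTACGTACGT"
b = "ACGTACGTTCGTACGTACGA"
# Hypothetical rate: 0.005 substitutions per site per million years.
print(divergence_time(a, b, 0.005))  # 0.10 / 0.01 = 10.0 Myr
```

The CET critique below amounts to saying `rate_per_site_per_myr` is not constant: if early lineages evolved faster, a fixed-rate calculation like this one overestimates how long ago they split.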
| + | |||
| + | ''' Challenges & New Models (Budd & Mann's CET) ''' | ||
| + | the Covariant Evolutionary Tempo model suggests that when a big group of organisms appear, evolution actually speeds up. This would make it appear like more time was passing when evolution was really on fast-forward, differentiating into various groups that eventually appeared in the fossil record. “While the speeding clock idea needs testing,” Telford wrote, “it could explain other mismatches between molecular clocks and the fossil record.” | ||
| + | |||
| + | * '''Fossil Record Mismatch:''' Molecular clocks sometimes suggest earlier origins for animal groups than the fossil record shows, creating a gap (e.g., the [[Cambrian explosion]]). | ||
| + | * '''The CET Model:''' This model proposes that when a major group starts to diversify, it experiences: | ||
| + | ** '''Explosive Radiation:''' Rapid increases in species diversity. | ||
| + | ** '''Elevated Molecular Rates:''' Faster rates of genetic change. | ||
| + | ** '''Impact:''' This explains why fossil records show sudden appearances of major groups, as they truly did evolve and diversify quickly, rather than gradually over long periods, says this article from Uppsala University. | ||
| + | |||
| + | ''' Significance ''' | ||
| + | * '''Refined Timelines:''' The new models provide a more nuanced understanding of evolutionary history, reconciling molecular data with fossil evidence. | ||
| + | * '''Understanding Major Events:''' Helps explain rapid evolutionary events, like the emergence of mammals after dinosaur extinction, where a surviving lineage exploded in diversity. | ||
| + | * '''Connecting Disciplines:''' Bridges gaps between molecular biology, paleontology, and geology to build more accurate evolutionary trees. | ||
| + | |||
| + | <youtube>mzKXfz-QPF0</youtube> | ||
| + | <youtube>JbtfyRUxXB0</youtube> | ||
| + | |||
= <span id="Time Travel in Fiction"></span>Time Travel in Fiction =
* [[Causation vs. Correlation]]
* [[Books, Radio & Movies - Exploring Possibilities]]
Time travel in science and fiction serves as a rich narrative tool to explore [[Causation vs. Correlation | causality]], free will, and the malleability of timelines. Scientific theories like wormholes, relativistic travel involving time dilation, and closed timelike curves provide plausible mechanisms for how time travel might function. Popular movies, books, and shows are usually less concerned with how it works “under the hood” than with how it [[Causation vs. Correlation | causally]] affects the characters’ timelines: who has free will? Can you change things by going back to the past or forward into the future? Examples include Ender's Game, Planet of the Apes, Harry Potter and the Prisoner of Azkaban, Primer, Bill & Ted’s Excellent Adventure, Back to the Future, Groundhog Day, Looper, the video game “Braid”, and Lifeline. Whether driven by science or narrative needs, these portrayals reflect how characters experience and manipulate time, raising questions about fate, agency, and the consequences of tampering with the past.

<youtube>d3zTfXvYZ9s</youtube>
<youtube>mlMFYs0-XvU</youtube>
= <span id="Time & Music"></span>Time & Music =
[https://www.google.com/search?q=time+music+artificial+intelligence+ai ...Google search]
* [[End-to-End Speech]] ... [[Synthesize Speech]] ... [[Speech Recognition]] ... [[Music]]
<youtube>gk17N6cDKqQ</youtube>
Latest revision as of 11:37, 3 April 2026
YouTube ... Quora ...Google search ...Google News ...Bing News
- Time ... PNT ... GPS ... Retrocausality ... Delayed Choice Quantum Eraser ... Quantum
- Government Services:
- National Institute of Standards and Technology (NIST) ... Time and Frequency Division, Physical Measurement Laboratory
- U.S. Department of Homeland Security (DHS) ... Science and Technology (S&T) Positioning, Navigation, and Timing (PNT) Program
- Defense ... Precise Time Department ... U.S. Naval Observatory has maintained a Time Service Department since 1880
- Perspective ... Context ... In-Context Learning (ICL) ... Transfer Learning ... Out-of-Distribution (OOD) Generalization
- National Timing Centre ... Assured Time and Frequency for the UK
- Time ...Coordinated Universal Time UTC ... Clock ...Timekeeping | Wikipedia
- The Very Long and Fascinating History of Clocks | Christopher McFadden - Interesting Engineering
- What Is a Leap Second? | Konstantin Bikos and Anne Buckle - timeanddate.com
- Atomic clocks ...Tide Clock | Amazon
- Clock synchronization
- Time: Do the past, present, and future exist all at once? | BigThink (video) ... astrophysicist Michelle Thaller, science educator Bill Nye, author James Gleick, and neuroscientist Dean Buonomano discuss how the human brain perceives the passage of time, the idea in theoretical physics of time as a fourth dimension, and the theory that space and time are interwoven.
- Cybersecurity
- Crown Sterling ... changing the face of digital security with its non-integer-based algorithms that leverage time, AI and irrational numbers.
- Quantum cryptography ... the infosec industry looks to quantum cryptography and quantum key distribution (QKD)
- What’s a Time Crystal? | Charles Q. Choi - IEEE Spectrum ... And how do Google researchers use quantum computers to make them? ... quantum system of many particles that organize themselves into a periodic pattern of motion—periodic in time rather than in space—that persists in perpetuity.
- This Mirror Reverses How Light Travels in Time | Charles Q. Choi - IEEE Spectrum ... There are already applications in wireless, radar, and optical computing; these applications often reverse the order of signals to help process them.
Sequence/Time-based Algorithms
- 10 Incredibly Useful Time Series Forecasting Algorithms
- Artificial intelligence (AI) algorithms: a complete overview
- New AI Algorithms Streamline Data Processing for Space-based Instruments
- Unlocking The Power Of Predictive Analytics With AI - Forbes
- A Comparison of Time Series Databases and Netsil’s Use of Druid | Netsil
- Microsoft announces the general availability of Azure Time Series Insights | Ryan Waite - Microsoft
- Top 10 Time Series Databases | Outlyer
Time-based AI algorithms are algorithms that use time series data to make predictions or analyses. Time series data are data that are collected over time and have a temporal order. For example, the daily temperature, the stock prices, or the number of visitors to a website are all time series data. These algorithms can be used for a variety of purposes, such as forecasting future values, detecting trends and patterns, and making informed decisions based on historical data. They can be applied to many different fields, including finance, economics, meteorology, and healthcare.
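The defining property described above, values with a temporal order, can be made concrete with a tiny example. A rolling mean smooths noise and exposes a trend, one of the basic operations behind the forecasting methods this section catalogues (the visitor counts are invented):

```python
# A time series is just values with a temporal order. A rolling mean
# smooths noise and exposes the underlying trend.

daily_visitors = [120, 132, 101, 134, 150, 142, 160, 171, 155, 180]

def rolling_mean(series, window):
    """Mean of each consecutive `window`-sized slice of the series."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

smoothed = rolling_mean(daily_visitors, window=3)
print(round(smoothed[0], 2))       # (120 + 132 + 101) / 3 ≈ 117.67
# The smoothed series ends higher than it starts: an upward trend.
print(smoothed[-1] > smoothed[0])  # True
```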
Whenever we have developed better clocks, we’ve learned something new about the world.
- Alexander Smith, "New Time Dilation Phenomenon Revealed: Timekeeping Theory Combines Quantum Clocks and Einstein's Relativity" - Dartmouth College
Common
There are different types of sequence/time-based AI algorithms, depending on the goal and the method of the algorithm. Some of the most common ones are:
- Time Series Forecasting:
- Statistical:
- Autoregressive (AR): uses past values of the time series to predict future values. It assumes that the current value is a linear function of previous values. For example, AR can be used to forecast the weather based on historical data.
- Autoregressive Integrated Moving Average (ARIMA): is an extension of AR that also accounts for the trend and the seasonality of the time series. It uses differencing to make the time series stationary (i.e., having constant mean and variance) and then applies AR and moving average (MA) models. For example, ARIMA can be used to forecast the sales of a product based on past sales and seasonal patterns.
- Seasonal Autoregressive Integrated Moving Average (SARIMA): is a further extension of ARIMA that also accounts for the cyclic variations of the time series. It uses seasonal differencing and seasonal AR and MA models to capture the periodic fluctuations of the time series. For example, SARIMA can be used to forecast the electricity demand based on past demand and seasonal factors.
- Exponential Smoothing (ES): uses weighted averages of past values of the time series to predict future values. It gives more weight to recent values than older values, and it can also incorporate trend and seasonality components. For example, ES can be used to forecast the inventory level based on past demand and supply.
- Deep Learning:
- Prophet: is a modern and flexible approach to time series forecasting developed by Facebook. It uses a decomposable model that consists of trend, seasonality, and holiday components, and it allows for adding custom effects and prior information. For example, Prophet can be used to forecast the web traffic for a data science blog website based on past traffic and special events.
- Neural Turing Machine (NTM): combines the fuzzy pattern matching capabilities of Neural Networks with the algorithmic power of programmable computers. NTMs are an instance of Memory Augmented Neural Networks, a new class of Recurrent Neural Network (RNN)s which decouple computation from memory by introducing an external memory unit. NTMs have demonstrated superior performance over Long Short-Term Memory Cells in several sequence learning tasks.
- Neural Networks:
- Recurrent Neural Network (RNN): is a type of Deep Learning model that can process sequential data such as time series. It uses a network of neurons that have feedback loops, which enable them to store information from previous inputs. For example, RNN can be used to forecast the prices of Bitcoin based on past prices and other factors.
- Gated Recurrent Unit (GRU): a gating mechanism in Recurrent Neural Network (RNN) architectures. Like other RNNs, a GRU can process sequential data such as time series, natural language, and speech. The GRU is similar to a Long Short-Term Memory (LSTM) with a forget gate, but has fewer parameters than an LSTM, as it lacks an output gate. This means that GRUs are generally easier and faster to train than their LSTM counterparts. GRUs have been found to perform similarly to LSTMs on tasks such as polyphonic music modeling, speech signal modeling, and natural language processing, showing that gating is indeed helpful in general.
- Long Short-Term Memory (LSTM): is a special type of RNN that can handle long-term dependencies in sequential data. It uses a memory cell that can store, update, and forget information over time, and it has gates that control the flow of information in and out of the cell. For example, LSTM can be used to forecast the generation of wind power based on past generation and weather conditions.
- Bidirectional Long Short-Term Memory (BI-LSTM): is a type of Recurrent Neural Network (RNN) architecture that processes data in both forward and backward directions. It consists of two LSTMs: one taking the input in a forward direction, and the other in a backward direction. BI-LSTMs effectively increase the amount of information available to the network, improving the context available to the algorithm. For example, knowing what words immediately follow and precede a word in a sentence. Compared to LSTM, BI-LSTM combines the forward hidden layer and the backward hidden layer, which can access both the preceding and succeeding contexts. This flow of data in both directions makes the BI-LSTM different from other LSTMs. BI-LSTMs have been successfully applied to various tasks such as natural language processing, speech recognition, and traffic forecasting.
- Bidirectional Long Short-Term Memory (BI-LSTM) with Attention Mechanism: is a type of Recurrent Neural Network (RNN) architecture that processes data in both forward and backward directions, and uses an attention mechanism to weigh the importance of different parts of the input sequence. The attention mechanism allows the network to focus on specific parts of the input sequence when making predictions, rather than treating all parts of the sequence equally. This can be particularly useful when dealing with long input sequences, where some parts of the sequence may be more relevant to the prediction than others. BI-LSTMs with Attention Mechanism have been successfully applied to various tasks such as text classification, Sentiment Analysis, and human activity recognition.
- Average-Stochastic Gradient Descent (SGD) Weight-Dropped LSTM (AWD-LSTM): is a variant of LSTM that employs DropConnect for regularization, as well as NT-ASGD for optimization. NT-ASGD stands for non-monotonically triggered averaged stochastic gradient descent, which returns an average of the last iterations of weights. AWD-LSTM has shown great results on both word-level and character-level models. It has been used in research papers on word-level models and has shown great results on character-level models as well.
- Sequence to Sequence (Seq2Seq): can map a variable-length input sequence to a variable-length output sequence. It is often used for natural language processing tasks, such as machine translation, text summarization, conversational models, and question answering. The Seq2Seq algorithm consists of two main components: an encoder and a decoder. The encoder reads the input sequence one timestep at a time and produces a hidden vector representation of the input. The decoder then uses the hidden vector as the initial state and generates the output sequence one timestep at a time, using the previous output as the input context.
- Transformer: is a state-of-the-art Deep Learning model that can process sequential data such as time series without using recurrent or convolutional layers. It uses layers of attention mechanisms that learn to focus on relevant parts of the input data, handles long-term dependencies and parallel computation efficiently, and can achieve better results than RNN-based Seq2Seq models on various tasks. For example, a Transformer can be used to forecast the spread of COVID-19 based on past cases and interventions.
- Generative Pre-trained Transformer (GPT): a family of language models that use Deep Learning techniques to generate natural language text. They are based on the transformer architecture and can be fine-tuned for various natural language processing tasks such as text generation, language translation, and text classification. The first GPT was introduced in 2018 by the American artificial intelligence (AI) company OpenAI. GPT models are artificial Neural Networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content.
- Attention Mechanism: allows the decoder to selectively focus on different parts of the input sequence when generating the output, instead of relying on a single fixed vector. This can improve the performance and accuracy of the Seq2Seq model, especially for long sequences
- Transformer-XL: is a transformer-based language model that introduces the notion of recurrence to the deep self-attention network. It was designed to enable learning dependency beyond a fixed length without disrupting temporal coherence. The model consists of a segment-level recurrence mechanism and a novel positional encoding scheme. This method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation.
- Beam search: is a technique to find the most probable output sequence given the input sequence, by keeping track of multiple candidate sequences and expanding them based on their probabilities. This can improve the quality and diversity of the output, compared to using a greedy or random search.
- Convolutional Neural Network (CNN): is another type of Deep Learning model that can process sequential data such as time series. It uses layers of filters that can extract features from local regions of the input data, and it can capture complex patterns and relationships in the data. For example, CNN can be used to forecast an avalanche in a famous ski resort based on past snowfall and temperature data.
- Spatial-Temporal Dynamic Network (STDN): a Deep Learning framework proposed to address the challenge of modeling complex spatial dependencies and temporal dynamics in traffic prediction. A flow gating mechanism is introduced to learn the dynamic similarity between locations, and a periodically shifted attention mechanism is designed to handle long-term periodic temporal shifting. This approach has been shown to be effective in predicting taxi demand.
- Other:
- Gaussian Process (GP): is a type of probabilistic model that can handle uncertainty and noise in time series data. It uses a function that defines how similar any two points in the input space are, and it produces a distribution over possible outputs for any given input. For example, GP can be used to forecast the depletion level of stocks in stores based on past sales and inventory data.
- End-to-End Speech: end-to-end speech translation is an approach that has gained high interest from the research world in the last few years. It consists of using a single Deep Learning model that learns to generate translated text from the input audio in an end-to-end fashion. This approach, known as "end-to-end" or "direct" ST, offers many advantages over cascaded approaches, such as avoiding the accumulation of errors, the direct use of prosody from speech, and a lower inference time.
- (Tree) Recursive Neural (Tensor) Network (RNTN): a type of Neural Network that is mostly used for natural language processing. It has a tree structure with a neural net at each node. The purpose of these nets is to analyze data that have a hierarchy of structure. An RNTN is a powerful tool for deciphering and labeling patterns. Structurally, an RNTN is a binary tree with three nodes: a root and two leaves. The root and leaf nodes are not neurons, but instead, they are groups of neurons – the more complicated the input data, the more neurons are required. RNTNs have been successfully applied to Sentiment Analysis, where the input is a sentence in its parse tree structure, and the output is the classification for the input sentence, i.e., whether the meaning is very negative, negative, neutral, positive, or very positive.
- Temporal Difference (TD) Learning: refers to a class of model-free Reinforcement Learning (RL) methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods. While Monte Carlo methods only adjust their estimates once the final outcome is known, TD methods adjust predictions to match later, more accurate, predictions about the future before the final outcome is known.
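To make the last bullet concrete, here is a minimal TD(0) sketch on a toy five-state random walk. The environment, learning rate, and episode count are assumptions chosen purely for illustration:

```python
import random

# Toy 5-state random walk: states 0..4, start in the middle (state 2).
# An episode ends at state 0 (reward 0) or state 4 (reward 1).
N_STATES = 5
ALPHA = 0.1   # learning rate (chosen for the example)
GAMMA = 1.0   # undiscounted episodic task

def td0_random_walk(episodes=5000, seed=0):
    rng = random.Random(seed)
    v = [0.0] * N_STATES  # value estimates; terminal states stay at 0
    for _ in range(episodes):
        s = 2
        while s not in (0, N_STATES - 1):
            s_next = s + rng.choice((-1, 1))
            reward = 1.0 if s_next == N_STATES - 1 else 0.0
            # TD(0): nudge V(s) toward the bootstrapped target
            # r + gamma * V(s'), without waiting for the final outcome
            v[s] += ALPHA * (reward + GAMMA * v[s_next] - v[s])
            s = s_next
    return v

values = td0_random_walk()
# True values of states 1, 2, 3 are 0.25, 0.5, 0.75 (the probability of
# finishing on the right); the TD estimates fluctuate around them.
print([round(x, 2) for x in values])
```

Note how the update uses the current estimate `v[s_next]` as part of its target; that bootstrapping is exactly what distinguishes TD methods from Monte Carlo methods, which would wait for the episode's final reward.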
What Time Is It?
- DARPA Making Progress on Miniaturized Atomic Clocks for Future PNT Applications | US Defense Advanced Research Projects Agency (DARPA)
The Earth's rotation is so regular that the length of a day varies by only milliseconds ...do you feel the Earth's rotation slowing down?
Light Clock 1905 - Einstein's Thought Experiment
Imagine you have a special clock that works with light. This clock has two mirrors facing each other, and a beam of light bounces up and down between them. Every time the light goes from the bottom mirror to the top and back down, it counts as one tick of the clock. Einstein's light clock thought experiment shows that when things move fast, time slows down for them. This surprising idea helps us understand the nature of time and motion in our universe. Now, let's think about this clock in two different situations.
Situation 1: Standing Still: First, picture the clock sitting on a table, not moving at all. The light goes straight up to the top mirror and straight back down to the bottom mirror. If you measured the time it takes for the light to do this, you would see it takes a certain amount of time for one tick.
Situation 2: Moving Clock: Now, imagine you place the clock on a skateboard and push it so it's moving. As the clock moves, the light beam has to travel a different path. Instead of going straight up and down, it now has to go in a diagonal path because the mirrors are moving while the light is traveling. It's like when you throw a ball to a friend while running; the ball has to cover more distance because both of you are moving.
What This Means ... Because the light in the moving clock has to travel a longer, diagonal path, it takes more time for one tick to happen compared to when the clock is standing still. This means that for someone watching the moving clock, time appears to run slower for the moving clock compared to a clock that's not moving. This idea is called time dilation. It means that time actually passes at different rates depending on how fast something is moving. If you were riding on the skateboard with the clock, you wouldn't notice anything different about the clock's ticks. But someone standing still and watching you would see that your clock ticks more slowly.
Why It Matters ... This thought experiment helps us understand that time isn't the same everywhere and can be different depending on how fast things are moving. This concept is a key part of Einstein's theory of special relativity, which helps scientists understand how the universe works, especially when things are moving very fast, like spaceships or particles in a collider.
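The diagonal-path geometry above leads directly to the Lorentz factor, gamma = 1/sqrt(1 - v^2/c^2): one tick of the moving clock takes gamma times longer for a stationary observer. A small sketch of the arithmetic (the example speeds are arbitrary):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dilation_factor(v):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2): one tick of a clock
    moving at speed v takes gamma times longer for a stationary observer."""
    if abs(v) >= C:
        raise ValueError("speed must be below the speed of light")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(dilation_factor(10.0))      # a skateboard: effectively 1.0
print(dilation_factor(3_874.0))   # roughly GPS orbital speed: tiny, but it matters
print(dilation_factor(0.5 * C))   # half light speed: ~1.1547, ticks ~15% slower
```

This is why the skateboard example produces no noticeable effect, while the same formula becomes practically important for fast-moving satellites and particles in colliders.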
Precision Time Protocol (PTP)
- Precision Time Protocol PTP-1588 | IEEE ...High precision clock synchronization that computes latency and offset
- How Precision Time Protocol is being deployed at Meta | Oleg Obleukhov & Ahmad Byagowi - CONNECTIVITY, NETWORKING & TRAFFIC, OPEN SOURCE, PRODUCTION ENGINEERING, UNCATEGORIZED, WEB
- PTP IEEE 1588v2 | Juniper Networks ...Time Management Administration Guide
The Precision Time Protocol (PTP) is a protocol used to synchronize clocks throughout a computer network. On a local area network, it achieves clock accuracy in the sub-microsecond range, making it suitable for measurement and control systems. PTP is currently employed to synchronize financial transactions, mobile phone tower transmissions, sub-sea acoustic arrays, and networks that require precise timing but lack access to satellite navigation signals. Precision Time Protocol | Wikipedia
Overall, its structure is similar to NTP in that there are different levels within it and GPS satellites can serve as its time source. However, the major difference between Network Time Protocol (NTP) and PTP is that PTP is accurate to microseconds, making it more exact than NTP.
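Under the hood, PTP's clock correction comes down to four timestamps exchanged between master and slave. A simplified sketch of that arithmetic (the timestamp values are invented for the example; real PTP timestamps in network hardware, and the symmetric-path assumption is PTP's own):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Compute clock offset and one-way path delay from a PTP exchange.

    t1: master sends Sync           (master clock)
    t2: slave receives Sync         (slave clock)
    t3: slave sends Delay_Req       (slave clock)
    t4: master receives Delay_Req   (master clock)

    Assumes the network path is symmetric, as PTP itself does.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: slave clock runs 1.5 us ahead; true one-way delay is 10 us.
t1 = 100.0
t2 = t1 + 10.0 + 1.5   # arrival time per the (fast) slave clock
t3 = 150.0             # departure time per the slave clock
t4 = t3 - 1.5 + 10.0   # arrival time per the master clock
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)   # 1.5 10.0
```

The slave then steers its clock by the computed offset; the sub-microsecond accuracy in practice comes from taking these timestamps in hardware rather than in software.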
Navigation
- Time ... PNT ... GPS ... Retrocausality ... Delayed Choice Quantum Eraser ... Quantum
- Case Studies
- Autonomous Drones
- Deepmind teaches AI to follow navigational directions like humans | Tristan Greene
- History of Navigation | Wikipedia
- Department of Homeland Security (DHS) Science and Technology (S&T) Positioning, Navigation, and Timing (PNT) Program
- Navigation Aids | Department of Transportation, Federal Aviation Administration
- VN-300 | Vectornav ...miniature, high-performance Dual Antenna Global Navigation Satellite Systems (GNSS)-Aided Inertial Navigation System (INS) that combines micro-electromechanical systems (MEMS) inertial sensors, two high-sensitivity GNSS receivers, and advanced Kalman filtering algorithms to provide optimal estimates of position, velocity, and orientation.
Navigation is a field of study that focuses on the process of monitoring and controlling the movement of a craft or vehicle from one place to another. The field of navigation includes four general categories: land navigation, marine navigation, aeronautic navigation, and space navigation. Navigation | Wikipedia
Global Positioning System (GPS)
- Time ... PNT ... GPS ... Retrocausality ... Delayed Choice Quantum Eraser ... Quantum
- Astronomy
- GPS has counterparts in Russia's GLONASS, Europe's Galileo, China's BeiDou, India's IRNSS, and Japan's QZSS
- Artificial intelligence in GPS navigation systems | Jeffrey L. Duffany
- RoadTagger: GPS system upgrade utilizes AI to make sure you're in the right lane | David Nield - New Atlas ...Artificial intelligence to update digital maps and improve GPS navigation | Amit Malewar - InceptiveMind
- GPS.gov ...Timing
- Inside GNSS ...Global Navigation Satellite Systems
- Navstar | Space.com ...is a network of U.S. satellites that provide GPS services
- SpaceX launches third-generation GPS navigation satellite | CBS News ...the GPS-3 satellite — the fourth in a series of more powerful third-generation navigation stations built by Lockheed Martin — was expected to be deployed about 90 minutes after liftoff. Assuming tests and checkout go well, it will join a globe-spanning constellation of 31 GPS satellites.
- Air Force asks three U.S. contractors to develop miniature ASIC technology for next-gen GPS receivers | John Keller - Military & Aerospace Electronics ...small low-power-consumption GPS enabling technologies to include a next-generation ASIC for secure GPS land navigation.
- China Launches Beidou, Its Own Version of GPS | Andrew Jones - IEEE Spectrum ...China places the final Beidou navigation system satellite into orbit
- Big News For ISRO! Indian Navigation System (IRNSS) Gets Approval By IMP For Global Operations | Smriti Chaudhary - The EurAsian Times
GPS receivers that use the L5 band can pinpoint to within 30 centimeters or 11.8 inches. The GPS concept is based on time and the known position of GPS specialized satellites. The satellites carry very stable atomic clocks that are synchronized with one another and with the ground clocks. Any drift from time maintained on the ground is corrected daily. In the same manner, the satellite locations are known with great precision. GPS receivers have clocks as well, but they are less stable and less precise. Each GPS satellite continuously transmits a radio signal containing the current time and data about its position. Since the speed of radio waves is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the receiver receives it is proportional to the distance from the satellite to the receiver. A GPS receiver monitors multiple satellites and solves equations to determine the precise position of the receiver and its deviation from true time. At a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and clock deviation from satellite time). Global Positioning System | Wikipedia
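The "four satellites, four unknowns" solve described above is usually done by iterative linearization. Below is a simplified 2-D sketch: two position coordinates plus clock bias, so three satellites suffice. Real receivers solve the 3-D, four-satellite version, and the satellite geometry here is entirely made up for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * p for a, p in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def gps_fix_2d(sats, pseudoranges, iters=10):
    """Newton iteration for receiver position (x, y) and clock bias (seconds)
    from pseudoranges to satellites at known 2-D positions."""
    x, y, bias = 0.0, 0.0, 0.0
    for _ in range(iters):
        J, resid = [], []
        for (sx, sy), rho in zip(sats, pseudoranges):
            d = math.hypot(x - sx, y - sy)
            # predicted pseudorange = geometric range + c * (receiver clock bias)
            resid.append(rho - (d + C * bias))
            J.append([(x - sx) / d, (y - sy) / d, C])
        dx, dy, db = solve3(J, resid)
        x, y, bias = x + dx, y + dy, bias + db
    return x, y, bias

# Invented geometry: three "satellites", a true receiver at (3000, 4000) m,
# and a receiver clock running 1 microsecond fast.
true_x, true_y, true_bias = 3000.0, 4000.0, 1e-6
sats = [(0.0, 20_200_000.0), (15_000_000.0, 13_000_000.0),
        (-12_000_000.0, 16_000_000.0)]
pr = [math.hypot(true_x - sx, true_y - sy) + C * true_bias for sx, sy in sats]
x, y, bias = gps_fix_2d(sats, pr)
print(round(x), round(y), bias)  # recovers roughly (3000, 4000) and ~1e-6 s
```

The key point the sketch illustrates is that the receiver's clock error is an unknown solved alongside position, which is why one extra satellite beyond the number of position coordinates is always needed.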
Deep-Space Positioning System (DPS)
- NASA is Making An AI-Based GPS For Space | Kristin Houser
- Frontier Development Lab (FDL) ...Artificial Intelligence Research for Space Science, Exploration & All Humankind
Jamming and Spoofing
- The Resilient Navigation and Timing Foundation
- Department of Homeland Security (DHS) Science and Technology (S&T) Resilient Positioning, Navigation, and Timing (PNT) Conformance Framework
- The Space Force: A Conversation With United States Secretary Of The Air Force Barbara Barrett | Steve Forbes - Forbes ... We are vulnerable. For example, the U.S. and the global economy are totally dependent on satellites, most especially the GPS, which is operated by the Space Force.