How Machine Learning Algorithms & Hardware Power Apple’s Latest Watch And iPhones


Introduction

This is a great time to be a data scientist – all the top tech giants are integrating machine learning into their flagship products and the demand for such professionals is at an all-time high. And it’s only going to get better!

In this article, we’ll check out some of the ways Apple has used machine learning to enrich the user experience. And believe me, some of the numbers you’ll see will blow your mind.

And if you’re already itching to get started with building your first ML models on an iPhone using Apple’s CoreML, check out this excellent article.

The A12 Chip


The A12 uses features as small as 7 nanometers, compared to 10 nanometers in the A11, which explains the acceleration in speed. And did you really think Apple would let the event slide without mentioning battery life? The A12 chip has a smart compute system that automatically recognizes which tasks should run on the primary part of the chip (the CPU), which should be sent to the GPU, and which should be delegated to the neural engine.

So what’s the deal with the neural engine?


The neural engine’s key functions are two-fold:

To run the facial recognition algorithm that authenticates Face ID at very high speed. This algorithm uses neural networks to map certain facial features/points and has of course been trained on millions of images to avoid making mistakes in the real product. It’s critical that the algorithm takes physical objects like glasses and people’s hair into account, which Apple says it will do this year with even more accuracy

To track facial movement for Animojis. Similar to the description above, the algorithm maps certain facial features and converts them into an animal emoji face in real-time

This year’s engine has eight cores, which is how the chip can perform 5 trillion operations per second. Last year’s version had two cores and could go up to 600 billion operations per second. It’s a nice microcosm of how rapidly technology is evolving in front of our eyes.

And the neural engine can do even more…

It will help iPhone users take better pictures (how much better can you get every year?!). When you press the shutter button, the neural network identifies the kind of scene in the lens and makes a clear distinction between any object in the image and the background. So next time you take a photograph, just remember how quick the neural network must be to do all this in a matter of milliseconds.

You can learn all about object detection and computer vision algorithms in our ‘Computer Vision using Deep Learning‘ course! It’s a comprehensive offering and an invaluable addition to your machine learning skillset.

The Apple Watch

The Apple Watch Series 4 feels more like a health monitoring device than at any point since its debut four years ago. Of course, all the excitement is around the watch’s design and how it’s 35% bigger than last year’s product. But let’s step out of that limelight and look at one of the more intriguing features: the new health sensors.

The Watch comes with an electrocardiogram (ECG) sensor. Why is this important, you ask? Well, for starters, it’s the first smartwatch to pack in this feature. But more importantly, the sensor measures not just your heart rate, but also its rhythm. This helps monitor any irregular rhythm, and the Watch immediately alerts you in case of any impending danger. The ECG feature has received FDA clearance and was introduced with the endorsement of the American Heart Association.

Further, the Series 4 watches are integrated with an improved accelerometer and gyroscope, which help the sensors detect if the wearer has fallen over. Once a person has fallen and shown no sign of movement for 60 seconds, the device sends out an emergency call to up to five (pre-defined) emergency contacts simultaneously.

I’m sure you must have guessed by now what’s behind all these updates: yes, it’s machine learning. Healthcare, as I mentioned in this article, is ripe for machine learning adoption. There are billions of data points at play, and combining ML with domain expertise is where the jackpot lies. I’m glad to see companies like Apple utilizing it, albeit within their own products.

End Notes

The competition between the likes of Apple, Google, and others is heating up and artificial intelligence and machine learning could be the key to winning the battle. Hardware is critical here – as it gets significant upgrades each year, more and more complex algorithms can be built in.

Fascinated by all this and looking for a way to get started with data science? Try out our ‘Introduction to Data Science‘ course today! We will help you take your first steps into this awesome new world.

You couldn’t have picked a better time to get into data science, honestly. A quick glance at Apple’s official job postings shows more than 400 openings for machine learning related positions. The question then remains whether there are enough experienced people to fulfill that demand.

You can view the entire Apple event here.



Build And Automate Machine Learning

Intel keeps snapping up startups to build out its machine learning and AI operations.

In the latest move, TechCrunch has learned that the chip giant has acquired Cnvrg.io, an Israeli company that has built and operates a platform for data scientists to build and run machine learning models, which can be used to train and track multiple models, run comparisons on them, build recommendations, and more.

Intel confirmed the acquisition to us with a short note. “We can confirm that we have acquired Cnvrg,” a spokesperson said.


Intel isn’t disclosing any financial terms of the deal, nor who from the startup will be joining Intel.

Cnvrg, co-founded by Yochay Ettun (CEO) and Leah Forkosh Kolben, had raised $8 million from investors that include Hanaco Venture Capital and Jerusalem Venture Partners, and PitchBook estimates that it was valued at around $17 million in its last round.

It was just a week ago that Intel made another acquisition to boost its AI business, also in the area of machine learning modeling: it picked up SigOpt, which had developed an optimization platform to run machine learning modeling and simulations.


Cnvrg.io’s platform works across on-premise, cloud and hybrid environments, and it comes in paid and free tiers (we covered the launch of the free service, branded Core, last year).

It competes with the likes of Databricks, Sagemaker and Dataiku, as well as smaller operations that are built on open-source frameworks.

Cnvrg’s premise is that it provides an easy-to-use platform for data scientists so they can focus on devising algorithms and measuring how they perform, not on building or maintaining the platform they run on.

While Intel isn’t saying much about the deal, it seems that some of the same rationale behind last week’s SigOpt acquisition applies here as well: Intel has been refocusing its business around next-generation chips to better compete with the likes of Nvidia and smaller players like GraphCore.

So it makes sense to also provide, and invest in, AI tools for customers, specifically services to help with the compute loads they will be running on those chips.

It’s notable that in our article about the free Core tier last year, Frederic pointed out that those using the platform in the cloud can do so with Nvidia-optimized containers that run on a Kubernetes cluster.


Intel’s focus on the next generation of computing aims to offset declines in its legacy operations. In the last quarter, Intel reported a 3% decline in revenue, driven by a drop in its data center business.

It said it expects the AI silicon market to be bigger than $25 billion by 2024, with AI silicon in the data center accounting for more than $10 billion of that.

Understand Machine Learning And Its End-To-End Process

This article was published as a part of the Data Science Blogathon.

What is Machine Learning?

Machine Learning (ML) is a highly iterative process in which models learn from past experiences by analyzing historical data. On top of that, ML models are able to identify patterns in the data in order to make predictions about future, unseen cases.
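
As a quick illustration of this idea, here is a minimal scikit-learn sketch that learns from a tiny, made-up set of historical records and predicts an unseen case (the numbers and their meanings are invented purely for illustration, not taken from the article):

```python
# A minimal sketch of "learn from historical data, predict the future" with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: past advertising spend (feature) vs. observed sales (target)
X_history = np.array([[10.0], [20.0], [30.0], [40.0], [50.0]])  # spend in $k
y_history = np.array([25.0, 45.0, 62.0, 83.0, 105.0])           # sales in units

model = LinearRegression()
model.fit(X_history, y_history)        # the model "learns" the pattern in past data

future_spend = np.array([[60.0]])
print(model.predict(future_spend))     # prediction for an unseen, future case
```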

Why is Machine Learning Important?

Since the 5 V’s of big data (Volume, Velocity, Variety, Veracity, and Value) dominate the current digital world, most industries are developing various models to analyze their presence and opportunities in the market; based on these outcomes, they deliver better products and services to their customers at vast scale.

What are the major Machine Learning applications?

Machine learning (ML) is widely applicable in many industries, both for implementing and for improving their processes. Currently, ML is used in multiple fields and industries with practically no boundaries and plays a vital role across many areas.

Where is Machine Learning in the AI space?

A quick look at a Venn diagram of the AI space shows where ML sits and how it relates to the other AI components: artificial intelligence is the broadest field, machine learning is a subset of AI, and deep learning is in turn a subset of machine learning.

With all this jargon flying around us, that picture makes it quick to see what exactly each component covers.

How are Data Science and ML related?

The machine learning process starts by taking data from multiple sources, followed by a fine-tuned processing of that data; this data then feeds the ML algorithms chosen for the problem statement, such as predictive, classification, and other models available in the ML space. Let us discuss each stage one by one here.

Machine Learning – Stages: We can split the ML process into the 5 stages listed below.

Collection of Data

Data Wrangling

Model Building

Model Evaluation

Model Deployment

Identifying the business problem comes before the above stages: we must be clear about the objective and purpose of the ML implementation. To find a solution for the given/identified problem, we must collect the data and then work through the stages described below appropriately.

Data collection from different sources, which could be internal and/or external, is done to satisfy the business requirements/problems. The data could be in any format: CSV, XML, JSON, etc. Big Data tooling plays a vital role here in making sure the right data arrives in the expected format and structure.
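
As a small sketch of this collection step, assuming pandas is available and using hypothetical file names as placeholders for your own internal/external sources, the different formats can be loaded and combined like this:

```python
# Collecting data from different file formats with pandas (file names are hypothetical).
import pandas as pd

sales_csv = pd.read_csv("internal_sales.csv")      # tabular export from an internal system
partner_json = pd.read_json("partner_feed.json")   # dump from an external partner API
web_xml = pd.read_xml("web_catalog.xml")           # XML catalog (needs pandas >= 1.3 and lxml)

# Combine the sources into one DataFrame for the next stage (data wrangling)
raw_data = pd.concat([sales_csv, partner_json, web_xml], ignore_index=True)
print(raw_data.shape)
```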

Data Wrangling and Data Processing: The main objectives and focus of this stage are as below.

Data Processing (EDA):

Understanding the given dataset and helping to clean it up.

It gives you a better understanding of the features and the relationships between them

Extracting essential variables and leaving behind/removing non-essential variables.

Handling Missing values or human error.

Identifying outliers.

The EDA process aims to maximize insight into the dataset.

Handling missing values in the variables

Convert categorical into numerical since most algorithms need numerical features.

Transform variables that are not Gaussian (normal), since linear models assume the variables follow a Gaussian distribution.

Find outliers present in the data, and either truncate the data above a threshold or transform it using a log transformation.

Scale the features. This is required to give equal importance to all the features, and not more to the one whose value is larger.

Feature engineering is an expensive and time-consuming process.

Feature engineering can be a manual process, or it can be automated; a short preprocessing sketch covering these steps follows below.
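
Here is a minimal preprocessing sketch of the steps above (missing values, categorical encoding, log transformation, and scaling). The small DataFrame and column names are invented purely for illustration:

```python
# Minimal preprocessing sketch: missing values, encoding, log transform, scaling.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age":    [25, 32, np.nan, 41, 38],
    "income": [30_000, 52_000, 48_000, 250_000, 61_000],   # right-skewed, has an outlier
    "city":   ["Delhi", "Mumbai", "Delhi", "Pune", None],
})

# 1. Handle missing values
df["age"] = df["age"].fillna(df["age"].median())
df["city"] = df["city"].fillna(df["city"].mode()[0])

# 2. Convert categorical to numerical
df = pd.get_dummies(df, columns=["city"])

# 3. Correct skew / tame outliers with a log transform
df["income"] = np.log1p(df["income"])

# 4. Scale the features so no single feature dominates
scaled = StandardScaler().fit_transform(df)
print(scaled.shape)
```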

Training and Testing:

Training data is used to train the model; the quality of this data largely determines the efficiency of the algorithm.

Test data is used to see how well the machine can predict new answers based on its training.

Training

Training data is the data set on which you train the model.

Train data is the data from which the model learns its experience.

Training sets are used to fit and tune your models.

Testing

Testing checks whether the model has learnt well enough from the experiences it got in the train data set.

Test sets are “unseen” data used to evaluate your models.

Test data: after training the model, test data is used to test the efficiency and performance of the model.

The purpose of the random state in train test split: Random state ensures that the splits that you generate are reproducible. The random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order.

Data Split into Training/Testing Set

In machine learning, we split a dataset into training data and test data.

The split is usually around 20% for testing and 80% for training of the given dataset.

The major portion of the data is used to train your model.

The rest is used to evaluate the trained model.

But you cannot mix/reuse the same data for both Train and Test purposes

If you evaluate your model on the same data you used to train it, the estimate will be overly optimistic (the model may simply have overfitted), and the question of whether the model can predict new data remains unanswered.

Therefore, you should have separate training and test subsets of your dataset; a minimal split sketch follows below.
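
Here is a minimal sketch of the 80/20 split with a fixed random_state, using scikit-learn’s built-in Iris data purely as a stand-in for your own features and target:

```python
# Reproducible 80/20 train-test split with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)   # stand-in for your own features and target

X_train, X_test, y_train, y_test = train_test_split(
    X, y,
    test_size=0.2,      # 20% held out for testing, 80% for training
    random_state=42,    # seed: the same split is generated on every run
    stratify=y,         # keep class proportions similar in both subsets
)
print(X_train.shape, X_test.shape)
```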

MODEL EVALUATION: Each type of model has its own evaluation methodology; some of the most common evaluation metrics are listed here, with a short code sketch after the list.

Evaluating the Regression Model.

Sum of Squared Error (SSE)

Mean Squared Error (MSE)

Root Mean Squared Error (RMSE)

Mean Absolute Error (MAE)

Coefficient of Determination (R2)

Adjusted R2

Evaluating Classification Model.

Confusion Matrix.

Accuracy Score.
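
Here is a short scikit-learn sketch of the metrics above, computed on small made-up predictions purely for illustration (adjusted R2 is derived from R2, since scikit-learn has no built-in for it):

```python
# Computing the listed regression and classification metrics with scikit-learn.
import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error, r2_score,
                             confusion_matrix, accuracy_score)

# --- Regression example ---
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.4, 6.5, 9.3])

sse  = np.sum((y_true - y_pred) ** 2)              # Sum of Squared Error
mse  = mean_squared_error(y_true, y_pred)          # Mean Squared Error
rmse = np.sqrt(mse)                                # Root Mean Squared Error
mae  = mean_absolute_error(y_true, y_pred)         # Mean Absolute Error
r2   = r2_score(y_true, y_pred)                    # Coefficient of Determination
n, p = len(y_true), 1                              # sample size, number of predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)      # Adjusted R2
print(sse, mse, rmse, mae, r2, adj_r2)

# --- Classification example ---
y_true_cls = [1, 0, 1, 1, 0, 1]
y_pred_cls = [1, 0, 0, 1, 0, 1]
print(confusion_matrix(y_true_cls, y_pred_cls))
print(accuracy_score(y_true_cls, y_pred_cls))
```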

Deployment of an ML model simply means integrating the finalized model into a production environment and getting results from it to make business decisions.
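
As a minimal sketch of that hand-off, assuming joblib and scikit-learn, the finalized model can be persisted during training and reloaded inside the production environment (the file name is a hypothetical example):

```python
# Persisting a finalized model and loading it again for serving predictions.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
final_model = LogisticRegression(max_iter=1000).fit(X, y)

joblib.dump(final_model, "final_model.joblib")   # exported during the build/train step

# ... later, inside the production environment ...
serving_model = joblib.load("final_model.joblib")
print(serving_model.predict(X[:3]))              # predictions used for business decisions
```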

So, I hope you are now able to understand the end-to-end machine learning process flow, and I believe it will be useful for you. Thanks for your time.


Machine Learning Is Revolutionizing Stock Predictions

Stock predictions made by machine learning are being deployed by a select group of hedge funds that are betting that the technology used to make facial recognition systems can also beat human investors in the market.

Computers have been used in the stock market for decades to outrun human traders because of their ability to make thousands of trades a second. More recently, algorithmic trading has programmed computers to buy or sell stocks the instant certain criteria are met, such as when a stock suddenly becomes cheaper in one market than in another, a trade known as arbitrage.

Software That Learns to Improve Itself

Machine learning, an offshoot of studies into artificial intelligence, takes the stock trading process a giant step forward. Poring over millions of data points from newspapers to TV shows, these AI programs actually learn and improve their stock predictions without human interaction.

According to Live Science, one recent academic study said it was now possible for computers to accurately predict whether stock prices will rise or fall based solely on whether there’s an increase in Google searches for financial terms such as “debt.” The idea is that investors get nervous before selling stocks and increase their Google searches of financial topics as a result.

These complex software packages, which were developed to help translate foreign languages and recognize faces in photographs, now are capable of searching for weather reports, car traffic in cities and tweets about pop music to help decide whether to buy or sell certain stocks.


Mimicking Evolution and the Brain’s Neural Networks

A number of hedge funds have been set up that use only technology to make their trades. They include Sentient Technologies, a Silicon Valley-based fund headed by AI scientist Babak Hodjat; Aidiya, a Hong Kong-based hedge fund headed by machine learning pioneer Ben Goertzel; and a fund still in “stealth mode” headed by Shaunak Khire, whose Emma computer system demonstrated that it could write financial news almost as well as seasoned journalists.

Although these funds closely guard their proprietary methods of trading, they involve two well-established facets of artificial intelligence: genetic programs and deep learning. Genetic software tries to mimic human evolution, but on a vastly faster scale, simulating millions of strategies using historic stock price data to test the theory, constantly refining the winner in a Darwinian competition for the best. While human evolution took two million years, these software giants accomplish the same evolutionary “mutations” in a matter of seconds.

Deep learning, on the other hand, is based on recent research into how the human brain works, employing many layers of neural networks to make connections with each other. A recent research study from the University of Freiburg, for example, found that deep learning could predict stock prices after a company issues a press release on financial information with about 5 percent more accuracy than the market.

Hurdles the Prediction Software Faces

None of the hedge funds using the new technology have released their results to the public, so it’s impossible to know whether these strategies work yet. One problem they face is that stock trading is not what economists call frictionless: There is a cost every time a stock is traded, and stocks don’t have one fixed price to buyers and sellers, but rather a spread between bid and offer, which can make multiple buy-and-sell orders expensive. Additionally, once it’s known that a particular program is successful, others would rush to duplicate it, rendering such trades unprofitable.

Another potential problem is the possible effects of so-called “black swan” events, or rare financial events that are completely unforeseen, such as the 2008 financial crisis. In the past, these types of events have derailed some leading hedge funds that relied heavily on algorithmic trading. Traders recall that the immensely profitable Long-Term Capital Management, which had two Nobel Prize-winning economists on its board, lost $4 billion in a matter of weeks in 1998 when Russia unexpectedly defaulted on its debt.

Some of the hedge funds say they have a human trader overseeing the computers who has the ability to halt trading if the programs go haywire, but others don’t.

The technology is still being refined and slowly integrated into the investing process at a number of firms. While the software can think for itself, humans still need to set the proper parameters to guide the machines toward a profitable outcome.

Technology and industry trends are shaping the next era of finance. Check out our complete line of finance industry solutions to stay ahead of the competition.

Knowledge Enhanced Machine Learning: Techniques & Types

This article was published as a part of the Data Science Blogathon.

Introduction

In machine learning, the data is an essential part of training machine learning algorithms. The amount of data and the data quality highly affect the results from the machine learning algorithms. Almost all machine learning algorithms are data dependent, and their performance can be enhanced up to some threshold amount of data. However, a traditional machine learning algorithm’s behavior tends to become constant after a certain amount of data has been fed to the model.

This article will discuss the knowledge-enhanced machine learning techniques that introduce hierarchical and symbolic methods for limited data. Here we will discuss these methods, their relevance, and their working mechanisms, followed by other vital discussions related to them. These methods are appropriate when there is little data but a need to train an accurate machine-learning model. The article will help one understand the concepts related to knowledge-enhanced machine learning better and make efficient choices and decisions in limited-data scenarios.

Knowledge Enhanced Machine Learning

As the name suggests, knowledge enhanced machine learning is a type of technique where the knowledge of machine learning algorithms is enhanced by human capabilities or human understanding. In this technique, the machine learning algorithms apply their knowledge, and human or domain knowledge is integrated.

We humans can be trained on limited data, meaning that humans can learn several things quickly by seeing or practicing them, even with limited data. For example, if we see a particular device, let’s say a laptop, we can easily classify it and say it’s a type of electronic device. Also, we can classify it as an HP, Dell, or another model.


Machine learning models can classify several objects and perform specific tasks very quickly and efficiently, but the only problem is the amount of data: they typically require much more data to train an accurate model. This is where the knowledge-enhanced machine learning approach comes into the picture; it mainly combines two things, the model’s learned knowledge and human knowledge or human capabilities.

Hierarchical Learning and Symbolic Methods are knowledge-enhanced machine learning approaches where human knowledge can be used to train a machine learning model with limited data, and the model’s performance can be enhanced.

Hierarchical Learning

As discussed above, when we humans see particular objects, our human mind automatically tries to classify the object into several classes. Let’s try to understand the same thing by taking appropriate examples.


As discussed above, the human mind can be treated as a machine learning model trained on limited data that classifies an object on the spot into several categories. Let’s take an example to understand this.

Let’s suppose you see a dog. Looking at the dog, we can easily classify its parent category as “pet” and then classify the dog as a Labrador, Dalmatian, French Bulldog, or Poodle. Here we can see that there are several levels of hierarchy, where every layer has several categories, and based on knowledge of this hierarchy, we humans can classify objects.

To implement this approach, the machine learning model can be trained on every layer of the hierarchy, and the model can be hyper-tuned to obtain the hierarchical learning model; a minimal sketch follows below.
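
To make the idea concrete, here is a minimal sketch (my own, not from the original article) of one common way to implement hierarchical learning with scikit-learn: a parent-level classifier plus one child-level classifier per parent category. The toy feature vectors and labels are invented purely for illustration.

```python
# Hierarchical learning sketch: a parent classifier, then one child classifier per parent.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy data: 2 features per object, a parent label and a child label
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],   # pets
              [0.1, 0.9], [0.2, 0.8], [0.3, 0.7]])  # electronics
parent = ["pet", "pet", "pet", "electronic", "electronic", "electronic"]
child  = ["labrador", "poodle", "dalmatian", "laptop", "phone", "tablet"]

# Level 1: classifier for the parent category
parent_clf = DecisionTreeClassifier().fit(X, parent)

# Level 2: one classifier per parent category, trained only on that category's samples
child_clfs = {}
for p in set(parent):
    idx = [i for i, lbl in enumerate(parent) if lbl == p]
    child_clfs[p] = DecisionTreeClassifier().fit(X[idx], [child[i] for i in idx])

def predict_hierarchy(x):
    p = parent_clf.predict([x])[0]       # first decide the parent category
    c = child_clfs[p].predict([x])[0]    # then decide the child within that category
    return p, c

print(predict_hierarchy([0.85, 0.15]))
```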

Symbolic Methods

The symbolic methods are also a knowledge base machine learning approach that tries to integrate human knowledge to classify several objects and build an accurate machine learning model.

Some machine learning models are trained so that whenever they are given an unseen image or object, they can efficiently and accurately classify the particular thing. These models are trained on a large amount of data.

We implement the same thing in symbolic methods but with limited data. Here we create the description or tags for the various objects and feed the data to the model. As there is little data available, there will be few images to train the model on, but the description of many objects will still be available.


Once the model is trained on such data, it can efficiently classify unseen objects without having been trained on images of them, as it uses the descriptions or tags provided based on human knowledge. So here, human knowledge is used to create the descriptions or tags for the objects, and machine learning models are trained on that data.
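
As a rough illustration of the symbolic idea, the sketch below (my own, not from the article) matches an unseen object’s human-written tags against human-written class descriptions using TF-IDF similarity; the class names and descriptions are invented:

```python
# Symbolic-methods sketch: classify an unseen object by matching its tags
# against human-provided class descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class_descriptions = {
    "laptop": "portable electronic device keyboard screen battery computer",
    "dog":    "four legged furry pet animal barks tail",
    "car":    "vehicle four wheels engine road transport",
}

vectorizer = TfidfVectorizer()
class_matrix = vectorizer.fit_transform(class_descriptions.values())

def classify_by_description(object_tags: str) -> str:
    """Match an unseen object's tags against the human-provided class descriptions."""
    obj_vec = vectorizer.transform([object_tags])
    sims = cosine_similarity(obj_vec, class_matrix)[0]
    return list(class_descriptions)[sims.argmax()]

# An object never seen as an image, described only by tags
print(classify_by_description("furry animal with a tail that barks"))
```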

Hierarchical vs. Symbolic Methods

As both approaches use human knowledge, a gentle question might come to mind: What is the main difference between such techniques?

The hierarchical learning approach is more towards a hierarchy of an object and its classification. Here human knowledge is used to classify and create the hierarchy of an object. Then machine learning models are used to train the algorithm on every level of the hierarchy for limited data scenarios.

In Symbolic methods, human knowledge is used to create the descriptions or the tags for particular objects, where the machine learning models are trained on limited image data. This machine learning model can now perform classification tasks on unseen images using human-generated descriptions or tags.

In general, we cannot say that one approach is always better; it all depends on the specific data, models, and problem statement. Both approaches are used nowadays for better performance on limited data, and one should pick a specific approach per the requirements and the conditions associated with the problem.

Conclusion

This article discussed knowledge-enhanced machine learning techniques and their types. The hierarchical and symbolic approaches were discussed in detail, along with their core intuition and the differences between them. This should help readers understand limited-data scenarios better and more efficiently, and it may also help in interviews and examinations, as it is more of an academic topic.

Some Key Takeaways from this article are:

1. Knowledge-enhanced machine learning is a technique where human knowledge is used to train a machine learning model.

2. In the hierarchical technique, the machine learning model is trained on every hierarchy level generated by human knowledge or domain experts.

3. Symbolic methods also use human knowledge to generate descriptions or tags related to several objects so that the machine learning model can also classify unseen images and objects.

4. Both of the approaches are useful for specific cases and can be implemented as per the requirement and problem statement related to machine learning.

Want to Contact the Author?

Follow Parth Shukla @AnalyticsVidhya, LinkedIn, Twitter, and Medium for more content.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.


Back Propagation In Neural Network: Machine Learning Algorithm

Before we learn Back Propagation Neural Network (BPNN), let’s understand:

What is Artificial Neural Networks?

A neural network is a group of connected I/O units where each connection has a weight associated with it. It helps you to build predictive models from large databases. This model builds upon the human nervous system. It helps you to conduct image understanding, human learning, computer speech, and more.

What is Backpropagation?

Backpropagation is the essence of neural network training. It is the method of fine-tuning the weights of a neural network based on the error rate obtained in the previous epoch (i.e., iteration). Proper tuning of the weights allows you to reduce error rates and make the model reliable by increasing its generalization.

Backpropagation in neural network is a short form for “backward propagation of errors.” It is a standard method of training artificial neural networks. This method helps calculate the gradient of a loss function with respect to all the weights in the network.


How Backpropagation Algorithm Works

The Back propagation algorithm in neural network computes the gradient of the loss function for a single weight by the chain rule. It efficiently computes one layer at a time, unlike a naive direct computation. It computes the gradient, but it does not define how the gradient is used. It generalizes the computation in the delta rule.

Consider the following backpropagation neural network example to understand how the algorithm works:


Inputs X arrive through the preconnected path.

Input is modeled using real weights W. The weights are usually randomly selected.

Calculate the output for every neuron from the input layer, to the hidden layers, to the output layer.

Calculate the error in the outputs

ErrorB= Actual Output – Desired Output

Travel back from the output layer to the hidden layer to adjust the weights such that the error is decreased.

Keep repeating the process until the desired output is achieved (a minimal NumPy sketch of these steps follows below)
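
Here is a minimal NumPy sketch of the steps above, using a single hidden layer and sigmoid activations on the toy XOR problem (an illustrative assumption, not part of the original tutorial):

```python
# Minimal backpropagation sketch: forward pass, output error, backward weight updates.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])   # inputs
y = np.array([[0.0], [1.0], [1.0], [0.0]])                        # desired outputs (XOR)

W1 = rng.normal(size=(2, 4))    # input -> hidden weights (randomly selected)
W2 = rng.normal(size=(4, 1))    # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(5000):
    # Forward pass: compute the output of every layer
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Error in the outputs (actual output - desired output)
    error = output - y

    # Backward pass: travel from the output layer back to the hidden layer,
    # adjusting the weights via the chain rule so that the error decreases
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_output
    W1 -= lr * X.T @ grad_hidden

print(np.round(output, 2))   # should approach the desired XOR outputs
```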

Why We Need Backpropagation?

Backpropagation is fast, simple and easy to program

It has no parameters to tune apart from the number of inputs

It is a flexible method as it does not require prior knowledge about the network

It is a standard method that generally works well

It does not need any special mention of the features of the function to be learned.

What is a Feed Forward Network?

A feedforward neural network is an artificial neural network where the nodes never form a cycle. This kind of neural network has an input layer, hidden layers, and an output layer. It is the first and simplest type of artificial neural network.

Types of Backpropagation Networks

Two Types of Backpropagation Networks are:

Static Back-propagation

Recurrent Backpropagation

Static back-propagation:

It is one kind of backpropagation network which produces a mapping of a static input for static output. It is useful to solve static classification issues like optical character recognition.

Recurrent Backpropagation:

Recurrent Back propagation in data mining is fed forward until a fixed value is achieved. After that, the error is computed and propagated backward.

The main difference between these two methods is that the mapping is rapid and static in static back-propagation, while it is non-static in recurrent backpropagation.

History of Backpropagation

In 1961, the basic concepts of continuous backpropagation were derived in the context of control theory by Henry J. Kelley and Arthur E. Bryson.

In 1969, Bryson and Ho gave a multi-stage dynamic system optimization method.

In 1974, Werbos stated the possibility of applying this principle in an artificial neural network.

In 1982, Hopfield brought his idea of a neural network.

In 1986, by the effort of David E. Rumelhart, Geoffrey E. Hinton, Ronald J. Williams, backpropagation gained recognition.

In 1993, Wan was the first person to win an international pattern recognition contest with the help of the backpropagation method.

Backpropagation Key Points

Simplifies the network structure by removing weighted links that have the least effect on the trained network

You need to study a group of input and activation values to develop the relationship between the input and hidden unit layers.

It helps to assess the impact that a given input variable has on a network output. The knowledge gained from this analysis should be represented in rules.

Backpropagation is especially useful for deep neural networks working on error-prone projects, such as image or speech recognition.

Best practice Backpropagation

Backpropagation in a neural network can be explained with the help of the “Shoe Lace” analogy

Too little tension =

Not enough constraining and very loose

Too much tension =

Too much constraint (overtraining)

Taking too much time (relatively slow process)

Higher likelihood of breaking

Pulling one lace more than the other =

Discomfort (bias)

The actual performance of backpropagation on a specific problem is dependent on the input data.

Back propagation algorithm in data mining can be quite sensitive to noisy data

You need to use the matrix-based approach for backpropagation instead of mini-batch.

Summary

A neural network is a group of connected I/O units where each connection has a weight associated with it.

Backpropagation is a short form for “backward propagation of errors.” It is a standard method of training artificial neural networks

Back propagation algorithm in machine learning is fast, simple and easy to program

A feedforward BPN network is an artificial neural network.

Two Types of Backpropagation Networks are 1)Static Back-propagation 2) Recurrent Backpropagation

In 1961, the basic concepts of continuous backpropagation were derived in the context of control theory by Henry J. Kelley and Arthur E. Bryson.

Back propagation in data mining simplifies the network structure by removing weighted links that have a minimal effect on the trained network.

It is especially useful for deep neural networks working on error-prone projects, such as image or speech recognition.

The biggest drawback of backpropagation is that it can be sensitive to noisy data.
