Why It’s Time To Take Linkedin Seriously
If you’ve ever wondered about the relevancy of LinkedIn, doubt no more. The business-related social network passed the 10-year milestone this weekend and now counts more than 225 million members.
But is there really value in spending time on LinkedIn? Is it worth your time to create a robust profile there? Can this platform benefit your business? The answer is yes, on all counts. Here’s why.
What well-connected entrepreneurs say
From my nearly 700 LinkedIn connections, I tapped a slew of entrepreneurs with more than 500 connections each about how they use the platform. The consensus: Time using LinkedIn is well-spent. Here’s what they said.
“My company has just joined the Microsoft Accelerator for Azure powered by TechStars and they have challenged us to do 100 customer discovery interviews in an amazingly short time frame. I’ve used LinkedIn for years and connected dutifully with people I’ve met with professionally, I’ve joined groups and followed them, but until recently I never really tried to mine the network for second level introductions – that is the part that has been interesting to me.”
—Mary Haskett, Co-founder and CEO at Austin, Texas-based Beehive Biometrics
“Just today a contact whose company is having problems reached out. I wanted to help him, and the first thing I could think of was to Endorse him for skills. I have also been connected with scientists and researchers that I was third-degree LinkedIn connected with but never could have gotten access to without LinkedIn.”
—Amy Baxter, M.D. and CEO at Atlanta-based MMJ Labs, a medical device company
“Honestly, the way it helps me the most is being able to check out candidates before we book interviews with them and to gain intelligence about new hires at client companies, competition and potential partners. Nothing unusual, but we love the real-time aspect of LinkedIn vs. the old ways of calling a friend who knows a friend.”
—Jean Achille, founder and CEO at PR and marketing firms the Devon Group and Devon Interactive
“LinkedIn is essential to my business. We use the information to determine how to target our business development efforts, which increases our efficiency and reduces interruptions to contacts we might have contacted who weren’t in the right role. Because of LinkedIn we reach out and make the right offer to the right contact most of the time. This increases our sales and, we think, our reputation with our business partners.”
—Felena Hanson, founder of the San Diego co-working space HeraHub
“I primarily use LinkedIn as a credential network rather than a peer communication network. I’ve found, on many occasions, that LinkedIn is the primary social network for individuals who are less comfortable using other social networks like Facebook or Twitter. Additionally, it does give an option to contact others, and it’s useful to see past experience and other connections. I personally use Twitter far more effectively for professionally communicating with people, but I have gotten into contact with individuals via LinkedIn I wouldn’t have been able to reach otherwise.”
—Coty Beasley, co-founder and chief design officer with Kansas City, Missouri-based CandyCam Multimedia Robotics
“I track blog posts, notes, musings from colleagues, and trends of where everyone is working or looking to work. [There’s a] big shift to mobile, cloud, social, as opposed to other jobs that are not hot.”
—Carlos Icaza, co-founder and CEO at Mountain View, California-based game engine startup Lanica
“In the sales business case, I use Linkedin to discover the internal organigram, without having to directly contact the organization in the presales process. I try to understand who is the boss of whom, who has connections with whom, so that I can try to reach out to people who are not directly on the deal at hand but can give me info on the people who are.”
—Denis Harscoat, Switzerland-based co-founder and CEO of the action-tracking app DidThis
How to make better use of LinkedIn
Cultivate as many contacts as possible.
To get the most benefit from LinkedIn, invest some time building out your profile. It’s important to include specific, significant results. For entrepreneurs this means communicating traction—things like funding dollars from big shot VCs, incredible user numbers or the unique value you deliver to customers.
Speaking of expanding your network, you want to have as many connections on LinkedIn as possible. Even though LinkedIn itself says it only wants to bring together people who actually know each other, I accept connection requests all the time from people I don’t know. Unlike Facebook where I might be sharing personal or location information I wouldn’t want strangers to have, on LinkedIn it’s all about business, and the more people you have access to, the better.
It’s Time To Start Paying Attention To Nipah Virus
We’ve known about Nipah virus—and that it could cause a global pandemic—for twenty years. But we’re still in the first stages of fighting it.
There’s “currently no specific drugs or vaccines for Nipah virus infection,” Linfa Wang, a bat-borne virus expert at Duke University’s Global Health Institute and conference co-chair, told Reuters this week. Wang and other experts are currently gathered in Singapore for the first-ever Nipah virus conference. At the two-day event they’re talking about everything from the history of Nipah outbreaks to ways forward for containing the disease. They’re also hoping to raise its profile.
Nipah’s first recorded outbreak, which hit Malaysia in 1999, killed 105 of the 265 people known to be infected. We’ve since seen numerous outbreaks in South and Southeast Asian countries, including Singapore, Bangladesh, and India. In Bangladesh, the World Health Organization notes, Nipah outbreaks have occurred “nearly annually” since 2001.
The Nipah virus (NiV) is named after Sungai Nipah, a Malaysian village where it was found to have infected pig farmers. The first outbreaks in Malaysia and Singapore were halted after authorities killed more than a million pigs in Malaysia’s largest-ever animal culling, which also represented a huge financial loss, the CDC notes.
But it turned out that pigs were just incubators for a disease that had been around for a while in fruit bats. “It doesn’t affect the fruit bat,” says epidemiologist Micah Hahn of the University of Alaska-Anchorage, whose work on Nipah was published in 2014. “But the virus can be transmitted by saliva, urine, or feces.” When people (or pigs) come into contact with the fruit bat’s bodily fluids, they can get the disease.
In humans, an early-stage Nipah infection can resemble the flu: vomiting, dizziness, and a sore throat are the usual symptoms. Some people also have a cough. But as it progresses, Nipah can cause potentially fatal brain inflammation. The WHO estimates that between 40 and 75 percent of people who contract Nipah die, depending on how quickly doctors can identify the disease and what kind of care is available. People suffering from the virus belong in intensive care settings where their symptoms can be treated and they can be closely monitored by medical professionals, according to the WHO’s recommendations. However, such environments are few and far between in many of the places where Nipah is found, and a common way for a single case to spread is when loved ones take care of the sick and subsequently become infected.
“Although Nipah virus has caused only a few known outbreaks in Asia, it infects a wide range of animals and causes severe disease and death in people,” according to a statement from WHO. Fruit bats are the disease’s only known reservoir, but the illness can infect pigs and other domesticated animals like cats and dogs.
In the case of the Malaysia and Singapore outbreaks, it’s thought that the disease jumped from bats to pigs, who passed it to the humans who ate or worked around them via their bodily fluids. But in Bangladesh’s “Nipah belt,” which stretches diagonally across the country, outbreaks have been traced to the body fluids of fruit bats hanging around on date palms.
Untreated date palm sap is a popular drink in Bangladesh, tapped straight from the tree or fermented. If fruit bats are urinating or eating or pooping in the sap, Hahn says, they can contaminate the supply. That’s the pipeline behind most outbreaks of the disease, although it’s also possible for people to transmit the virus to one another—particularly, research has shown, if they are extremely sick or suffering from symptoms that make them cough.
In Bangladesh, “We found that in the areas where most of the cases were happening [the forest] was really fragmented,” Hahn says. Although there are date palms all over the country and drinking palm sap is as common as having a beer in the evening, she says, areas with Nipah issues had people and bats living in closer proximity, the result of deforestation for farming, timber, or to make way for human settlement. “These bats are roosting in people’s backyards, essentially,” Hahn says. Areas with more space saw less of the disease. Hahn’s team is working to help farmers figure out how to keep bats out of their palms so the virus doesn’t spread.
These kinds of preventive measures are urgently needed, since there’s no vaccine for the virus yet. This year’s meeting in Singapore is aiming to change that by bringing Nipah researchers together and helping to raise the disease’s public profile in the West.
Nipah isn’t here yet, it’s true—but it could come here, especially if it mutates further to be more easily passed from person to person. The human and animal suffering already caused by the disease is reason enough to try and find a vaccine, but the consequences of inaction could impact us all.
Time To Switch To Google Analytics 4? 14 Experts Give Their Take
There is only one year left before Universal Analytics goes away. So what is going to happen to our data? What is the best migration plan?
There is no doubt that these burning questions have been raised by everyone who is using this tool. With Google deciding to sunset Universal Analytics, many people are wondering what the next step is.
🚨 Note: If you need a quick analytics health check, have a look at our GA4 Audit guide.
We heard you and decided to address this dramatic change by talking to industry professionals and asking their opinion on it.
Should you switch to Google Analytics 4? Read on and find out!
So we asked our experts the following questions:
Undoubtedly, migration is a complex process and should be planned carefully. Therefore, dual tagging is still a common trend.
I’ve been using GA4 as the primary tool for only a few clients so far.
Pitfalls are primarily around confusion about dimensions and the reporting interface, which need to be better understood before they become useful.
Dana DiTomaso from Kick Point: “We are already using GA4 with clients but not always as the main tool depending on a few factors”
We are already using GA4 with clients but not always as the main tool depending on a few factors.
If they are brand new and don’t have overly complex analytics requirements, then GA4 is their primary analytics source, with reports in Google Data Studio.
If they have years of data in GA Universal, we are typically still using that for primary analysis, but the flexibility of events in GA4 means that sometimes we have a hybrid approach.
For example, if we want to capture more parameters in an event than the Universal category/action/label hierarchy would allow for, we’ll use GA4 to report on that event specifically.
That being said, all data is still presented in Google Data Studio to clients, so they don’t need to be flipping between two versions of GA.
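To illustrate that hybrid approach, here is a minimal sketch (not Kick Point’s actual setup; the property IDs, event name, and parameters are placeholders): both properties are configured with gtag.js, and the parameter-rich event is routed only to GA4 via send_to.

```typescript
// Assumes the standard gtag.js snippet is already on the page;
// property IDs and field values below are placeholders.
declare function gtag(...args: unknown[]): void;

gtag('config', 'UA-XXXXXXX-1'); // Universal Analytics property
gtag('config', 'G-XXXXXXXXXX'); // GA4 property

// A parameter-rich event routed only to the GA4 property via send_to --
// more fields than UA's category/action/label hierarchy could hold.
gtag('event', 'file_download', {
  send_to: 'G-XXXXXXXXXX',
  file_name: 'annual-report.pdf',
  file_extension: 'pdf',
  link_text: 'Download the report',
  link_url: '/downloads/annual-report.pdf',
});
```

Universal Analytics keeps receiving everything else, so historical reporting is undisturbed while the richer event lands in GA4.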
Yehoshua Coren from Analytics Ninja: “We aren’t using the interface at all, just BigQuery”
For the most part, no, we aren’t using GA4 with clients as the main tool. We have one client for whom GA4 is the primary tool for data collection, but we aren’t using the interface at all, just BigQuery.
Some of the reporting capabilities within Explorations are pretty good, but the product isn’t ready yet to replace the very comfortable reporting that most clients are used to in Universal Analytics.
Advantages:
BigQuery. Though the 1M events per day limit for non-GA360 clients is pretty limiting (see the export query sketch after these lists)
Funnel Exploration is done nicely
Ability to have both App and Web data modelled consistently in a single location is a benefit
More robust segmentation engine compared to Universal Analytics
Pitfalls:
Reporting interface crashes pretty often (for larger data sets)
Identity resolution is sub-par (no visitor stitching, logged in users ‘change’ primary IDs mid-session)
Differences between attribution and sessionization compared to Universal Analytics make reporting differences very confusing to end users
Standard reports are much less useful than Universal Analytics’
Pathing report doesn’t let users choose their own nodes
VERY strict and asinine data collection limits. Such as character limits on user properties and event parameters. Even the ‘page_location’ dimension is truncated after 300 characters
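For readers wondering what “just BigQuery” looks like in practice, here is a minimal sketch of querying the GA4 daily export with the Node.js BigQuery client (the project/dataset name and date are placeholders; the events_YYYYMMDD tables and the user_pseudo_id column are the standard export layout):

```typescript
import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Count events and distinct users per event_name for one day of the
// GA4 export. 'my-project.analytics_123456789' is a placeholder dataset.
const query = `
  SELECT
    event_name,
    COUNT(*) AS events,
    COUNT(DISTINCT user_pseudo_id) AS users
  FROM \`my-project.analytics_123456789.events_20220601\`
  GROUP BY event_name
  ORDER BY events DESC
`;

async function main(): Promise<void> {
  const [rows] = await bigquery.query({ query });
  for (const row of rows) {
    console.log(`${row.event_name}: ${row.events} events, ${row.users} users`);
  }
}

main().catch(console.error);
```

From here, reporting is built in SQL or a BI tool rather than in the GA4 interface, which is the trade-off Yehoshua describes.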
Fred Pike from Northwoods: “I am absolutely sold on GA4 as the future”
I am still double-tagging. I’ve not moved any client to GA4 100% yet.
I’ve worked on two large-ish website relaunches recently and have done the side-by-side mapping of the user interactions we want to track.
In GA3, of course, I’m still having to stuff two or sometimes three fields into the event label or, sometimes, the event action.
YouTube tracking is a perfect example, where the event label usually has the video title, the YT URL, and, sometimes, the page the video was on.
Put that GA3 field-stuffing next to GA4 and it becomes immediately obvious how great the GA4 data model is.
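Here is a hedged sketch of that contrast, using the YouTube case Fred describes (the GA4 side borrows parameter names from GA4’s enhanced-measurement video events; titles and URLs are made up):

```typescript
declare function gtag(...args: unknown[]): void;

// GA3 / Universal Analytics: only category/action/label, so the label
// gets stuffed with title + YouTube URL + page (illustrative values).
gtag('event', 'play', {
  event_category: 'YouTube',
  event_label: 'Intro Video | https://youtu.be/XXXXXXXXXXX | /about-us',
});

// GA4: the same interaction, one parameter per data point.
gtag('event', 'video_start', {
  video_title: 'Intro Video',
  video_url: 'https://youtu.be/XXXXXXXXXXX',
  video_provider: 'youtube',
  page_location: '/about-us',
});
```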
I love the event-driven approach and love love love having enough parameters to handle all the data points I want to track. (At least so far!) So I am absolutely sold on GA4 as the future.
I am not a fan of the UI and I hate the limitations in the Explore report writer. But I think the data model, and knowing you’ll do much of your analysis outside of GA4, makes it viable.
I am also conscious of the 50 custom dimensions limitation (120 in the paid version), so I’ve started using three generic parameters: type, sub_type, text.
These are broadly applicable to a large number of GA4 events and, as long as you’re using them similarly across all instances, you don’t have to create a bunch of custom dimensions.
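A sketch of that generic-parameter pattern (event names and values here are illustrative, not Fred’s actual setup): the same three parameters are reused across unrelated events, so only three custom dimensions ever need to be registered in the GA4 property.

```typescript
declare function gtag(...args: unknown[]): void;

// The same three generic parameters -- type, sub_type, text -- reused
// across different events. Register each once as a custom dimension
// and every event that sends them is covered.
gtag('event', 'cta_click', {
  type: 'button',
  sub_type: 'header',
  text: 'Request a demo',
});

gtag('event', 'form_submit', {
  type: 'form',
  sub_type: 'newsletter',
  text: 'Footer signup',
});
```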
Peter O’Neill from ZHS Orchards: “GA4 configuration options are mixed, some very powerful and some missing”
We are collecting data in GA4 but not using it yet.
I think the data collection is fine, although it would be good to have nested variables (I don’t think this is available, though I’m not that close to it). The logic matches what I used for the data layer structure, so no issues there.
Accessing the data via BigQuery is fine, for the companies with people that can do this.
GA4 configuration options are mixed, some very powerful and some missing. It’s crazy that event names don’t allow spaces; the tool was created by developers for themselves, with no awareness of the actual users.
Until the non-analysts in an organisation can use a digital analytics tool, it is not good enough. GA4 currently fails.
Miroslav Varga from Escape Ltd: “GA4 is Google’s solution and you have to take it seriously, no matter about your personal attitude”
GA4 is Google’s solution and you have to take it seriously, no matter about your personal attitude. But our clients are not very excited to switch to GA4.
They treat it as a beta tool and don’t like to invest too much time-effort-money towards the migration. By default, we install both properties, but they don’t use GA4 as much as they use GA3.
There is also the trouble with GDPR and the problem of transferring PII outside the EU (DPAs in Austria, Germany, and France have already issued verdicts in that sense). Rulings on GA4 are still to come.
Stockton Fisher from Greater Than Marketing: “I am very excited about the future of Google Analytics”
We are not using GA4 as the main reporting tool quite yet for any clients. Although we have it fully set up, it hasn’t taken its place as the main source of data.
However, there are many things it does do really well and we do use it for. For example, the explorations are a quick and easy way to find some specific answers. The BigQuery export is a welcome integration that allows us to create custom tables combining data from GA and other sources.
Overall, I like the new data model and how it uses events for everything; I think that is much more flexible. However, it is more complex to get started with, so that’s the biggest con for new people trying to break into the platform.
I love where it’s going, I am very excited about the future of Google Analytics. There are just a few more functionalities they need to build still to make it #1 for me.
Mike Rhodes from WebSavvy: “I can understand why Google needs to make the switch, but I wish they’d find a way to allow businesses to view historic data for a lot longer than seems likely”
We’re not yet using it with the majority of clients as their main analytics tool. But we have installed it in parallel with UA for clients that were willing to test it early & start gathering data.
The main problems have been the learning curve needed – both by us & the client – to understand the new reports & make the mental switch to the new event-based model.
Clients inevitably want to see ‘the new version of’ a particular report which hasn’t always been easy for us to create.
As such, we haven’t yet used GA4 data in our GDS reports; those are still mostly driven by the performance platforms, with some UA data as needed.
I can understand why Google needs to make the switch, but I wish they’d find a way to allow businesses to view historic data for a lot longer than seems likely.
But we’re excited to see the predictive metrics & the fact that less data will be omitted (although I’d love more transparency about ‘modelled conversions’!)
All in all, it’s a change we’ll embrace!
Some companies have started early migration to quickly adjust to the changes.
Zorin Radovancevic from Escape Ltd: “What most people forget though is the additional reporting suite in GA4 found under Explore which is much more powerful than any reporting feature inside the old Google Analytics Universal”
In some cases, GA4 is the primary tool for analytics – mostly for clients which are APP first in their approach as they are already used to the Firebase model and reporting.
1) RAW data access with the BQ export, which gives us the ability to easily enrich the data set for further use cases, and a very flexible event model with the increased schema for custom dimensions and metrics even in the free version.
2) The process of (re)evaluating analytics efforts in terms of what we even need to port to a new solution is a good exercise, and with Universal Analytics soon to be deprecated, it just speeds up the process.
3) Machine learning in the backend exposed as reporting utilities (likelihood and anomalies), plus Consent Mode, which even in beta allows for additional conversion and behaviour modelling in a future, more regulated digital landscape.
Documentation – it just needs to be brought to a decent level. For instance, the Measurement Protocol (alpha/beta) and parameter reference are quite thin, and in more mature implementations and use cases they are quite essential.
Feature parity with Google Analytics Universal – still not there and this should be the sole focus of further product development.
I have no doubt that GA4 will be accepted soon by the wider community, but I also feel that GA will potentially lose the part of its customer base that needs a less complex, preset tool, which GA4 is not and will not be, at least in the short term.
What differentiates Google from the other vendors is its built-in ability to quickly integrate with the entire Google stack and this will be a very heavy argument in any alternative finding process.
Moreover, the GA community will recover once enough how-to articles become available covering the easy-to-use reporting and implementation concepts that are currently lacking.
Fortunately enough we have started with GA4 in its infancy and we have seen how it progressed so my view is optimistic.
Such significant changes cast doubts on the credibility of Google Analytics. As a result, different alternative tracking tools might be taken into consideration.
Gerry White from Rise at Seven: “The question for many businesses is are they prepared to rethink their metrics and re-configure something when the older version of analytics had been tuned and the teams had been trained?”
Google is sunsetting GA3 far too quickly. This isn’t a GA4-bashing thing; it is simply that far too many larger businesses will need longer to transition, especially as the metrics are not one-to-one.
GA4 has a very different way of looking at the world: it is far more focused on users rather than sessions. It has taken me a little bit of time to understand what some of the data is and where it lives; often a report you rely on in GA3 is simply not there, but it can be rebuilt.
The question for many businesses is are they prepared to rethink their metrics and re-configure something when the older version of analytics had been tuned and the teams had been trained?
The other question that is starting to pop up now is whether this sudden, forced change should make us question if Google is the right company to trust. Should we be investing in something like SnowPlow?
Sadly this isn’t as easy as it sounds for companies below enterprise level unless we start to see more analysts develop the ability to maintain and offer alternatives as solutions.
When it comes to GA4 migration, another important issue must be addressed: product knowledge gaps and the quickest ways to bridge them.
It is crucial to understand how the data is going to be analyzed and visualized and if this can be leveraged on your own without any external support.
Paul Koks from Online Metrics: “For clients mainly using the GA3 reporting UI for reporting/analysis, GA4 is really a pain – training of internal teams is important/required”
Here are some thoughts/observations:
For the majority of clients, GA3 is still the main tool in use. We have set up GA3 and GA4 in parallel and will do a lot of extra customizations in the next few months, and some probably later this year.
GA4 is still under heavy development (I feel) and not yet up to par with GA3. It should be much better at the end of 2023. Another challenge is that things are changing often (e.g. the interface, the list of standard dimensions, channel definitions), which requires (new) changes in the setup. For clients mainly using the GA3 reporting UI for reporting/analysis, GA4 is really a pain – training of internal teams is important/required.
Despite some remarkable improvements, the new reporting model can throw GA4 viability into question.
Brian Clifton from Verified Data: “To mitigate the issues, we are encouraging clients to use Data Studio as their reporting interface instead”
For implementation, the new data model is a huge improvement, particularly for enterprise users. For example, it provides a lot more flexibility with every data point now allowing for up to 25 associated parameters.
In addition, removing the free limit of 10M hits/month is a big plus, though it does mean considerably more planning and thought is required for the data structure from the get-go. That’s because if more than 50 custom dimensions are required (still a very generous allowance), you need to move to the paid/360 product.
Having said that, encouraging a well-thought-out plan for your data structure is a good thing, which is lacking with the “catch-all” approach of Universal Analytics.
However, I have to say the user experience of finding data and assessing reports is very poor in GA4. It’s as if the product has been built “by developers for developers”, with little consideration given to how existing users assess their website and marketing performance.
To mitigate the issues, we are encouraging clients to use Data Studio as their reporting interface instead. Maybe it is Google’s intention to focus on data collection and less on reporting, though it is an odd way to go about it.
Some experts tend to believe that these are not pitfalls but new challenges we will have to face as new technologies are developed, and that it is fundamental to make a mental shift as well.
We do have clients using GA4. Besides some of the predictive metrics and automated insights that Google has deployed in GA4, we’ve found that clients like the following:
Pathing and Funnels reporting is better
Combining dimensions and metrics can be a bit easier (clients sometimes struggled with mixing scoped dimensions in reporting resulting in odd or unexpected results)
The simplified data model has made reporting easier for some clients
The ability to build a left nav that is tailored to an organization’s needs instead of a standard reporting suite. We’ve had challenges in the past because there are reports available in UA that either don’t meet business needs or are for features that are not used, causing confusion and frustration for end users
eCommerce metrics are no longer session bound. If I view a product and add it to my cart, but then stop for lunch and come back and buy in the afternoon, UA reporting would show an abandonment in the morning session and a conversion in the afternoon, so Enhanced Ecommerce (EE) reporting was a bit awkward. In GA4, with the removal of session-based data, I can now see a more accurate picture of eCommerce behavior using Funnel Reports
Elapsed time is greatly improved
Real-time debugging is much easier
Consent mode. This is one that is currently of more value for our European clients but with the changing landscape for consent and privacy, GA4 will be better positioned with more granular data controls to manage how data is collected and used
I would not call these pitfalls but more challenges some clients are facing in the transition to GA4:
The deprecation of sessions. Almost all of our clients have built their KPIs around Sessions so moving to User in GA4 is challenging. Especially for clients that were using Landing Page, Exit Page, eCommerce Conversion Rate and other Session scoped Dimensions and Metrics
Sessions and Users are calculated differently. Hit/Event scoped is really the only place you can expect parity between UA and GA4 so we’ve been encouraging our clients to dual tag so they can start understanding what the difference is for these numbers
The mental shift from Category, Action, and Label to Event Name and Parameters. We’ve been having to do a lot of assisting and educating around this to help clients start to make the shift. A lot of our clients depend on a generic event dataLayer push method in GTM with Category, Action, and Label as parameters. We’ve had to assist a number of clients to determine how to bridge that gap to a GA4-friendly Event Name and Parameters (see the sketch after this list) to minimize the need to involve developers to update their dataLayer in the near term. (We don’t have any clients that don’t use a TMS, but I imagine this becomes even more difficult if they use hard-coded UA. Even more so if they are still using analytics.js and have not moved to gtag.)
No views. We’ve had some clients find challenges in moving to a GA structure that doesn’t allow for the filtering and manipulation that UA provided with views
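Here is a hedged sketch of that Category/Action/Label bridge (all names are illustrative, not any client’s actual setup): the legacy push keeps its shape, and a GA4 event tag in GTM remaps the same values into an event name plus plain parameters, so developers don’t have to touch the dataLayer yet.

```typescript
// Assumes the GTM snippet is already on the page; all names are illustrative.
const dataLayer: Record<string, unknown>[] = ((window as any).dataLayer ??= []);

// 1) The legacy generic push that existing site code already emits:
dataLayer.push({
  event: 'gaEvent',
  eventCategory: 'Video',
  eventAction: 'Play',
  eventLabel: 'Intro Video',
});

// 2) What a GA4 event tag in GTM effectively sends after remapping
//    those dataLayer variables: an event name plus plain parameters.
dataLayer.push({
  event: 'video_play',
  event_category: 'Video',
  event_action: 'Play',
  event_label: 'Intro Video',
});
```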
Biggest Advantages
At Tag Manager Italia we have been implementing and managing GA4 alongside Google Universal Analytics for our clients since October 2020, when E-commerce tracking in GA4 was officially released.
Thanks to a fully customizable data model, it is possible to manage user tracking by adopting your own data structure, without the need to rely on the standard data structure given by Universal Analytics
Ease of implementation: in GA4 everything is well defined as event nomenclature and parameters. This way, the implementation of push commands in the Google Tag Manager dataLayer is easier for the development team
Audience and Conversion management: in GA4 practically anyone can easily create audiences based on events and parameters. Moreover, conversions simply rely on deciding which events count as conversions
In GA4 we can now count on more detailed and realistic dimensions and metrics that ultimately give more actionable insights (e.g. ga_session_id, ga_session_number, engagement_time_msec, user_engagement, etc.)
Even the E-commerce tracking is more complete and specific, thanks to tons of new events and parameters for better analysis, planning, and optimization of digital marketing campaigns (see the dataLayer sketch after this list)
The Explore Report offers the possibility to do in-depth, more granular data analysis. In particular, the Funnel Exploration Report allows us to use Show Elapsed Time between the various funnel steps
The GA4 DebugView is the feature we were all waiting for, to verify that the tracking implementation is “error-free”
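As an illustration of the E-commerce point above (a minimal sketch; transaction and item values are made up), a GA4 purchase event pushed to the dataLayer for Google Tag Manager looks like this:

```typescript
// GA4 ecommerce purchase pushed to the dataLayer for Google Tag Manager
// (transaction and item values are made up for illustration).
const dataLayer: Record<string, unknown>[] = ((window as any).dataLayer ??= []);

dataLayer.push({ ecommerce: null }); // clear any previous ecommerce object
dataLayer.push({
  event: 'purchase',
  ecommerce: {
    transaction_id: 'T-10001',
    value: 49.9,
    currency: 'EUR',
    items: [
      { item_id: 'SKU-123', item_name: 'Example Product', price: 24.95, quantity: 2 },
    ],
  },
});
```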
In order to use GA4 at full power, it’s mandatory to create a measurement plan as the very first step in your digital analytics strategy. Implementing digital marketing and digital analytics strategies without a measurement plan is a big mistake, because you risk losing track of the purpose of the tracking events you implemented. Furthermore, without a measurement plan, the possibility of making tracking errors and wrong decisions becomes higher
GA4 is definitely a good product with great potential, but there are still a lot of things to improve.
Pitfalls
Here are the main GA4 pitfalls and a shortlist of things that should be fixed:
Custom Channel Grouping: in the current state it is impossible to create your own channel grouping. (This feature is in the roadmap)
Thresholding: if Google Signals is applied, even in standard reports you will have problems seeing the real metrics. To overcome this problem, you have to use BigQuery. GA4 refers to “unsampled” data, but this applies only to standard reports. In any case, the unsampled data are still bound by thresholding
Reports are not user-friendly (including the Explore section). For example, you can’t resize the columns of the reports. Sometimes the size of the report is too invasive, and the screen space never seems to be enough to get a full report overview
The Realtime Report – in its current state – doesn’t let you get a clear and immediate “picture” of the situation (at least, not like the “old” Universal Analytics)
The 24h wait to see data is quite annoying, but we have to get used to it
Data Filters based on IP addresses: from my point of view, this is a completely useless function. It is more convenient to manage filters with other parameters and not with the IP address
The GA4 data stream can be linked to multiple domains, but a Search Console account can only be linked to one property. This is a limit that can be overcome only by using the paid version of Google Analytics 360
Other nice features that are in the official roadmap will arrive soon.
Here are the most awaited ones:
Session scope and product scope
Conversion metric
In conclusion, GA4 is a completely different “tool” from Google Universal Analytics you used to work with.
FAQ
Are there any concerns or challenges related to the migration process to GA4?
Yes, there are concerns about the migration process, including the need for reconfiguring metrics and reporting, the learning curve for understanding the new event-based model, and the limitations of the GA4 user interface.
Is GA4 recommended for all businesses, or are there specific cases where it may be more suitable?
The suitability of GA4 depends on the specific needs and circumstances of each business. Some experts mention that GA4 may be more suitable for businesses with an app-first approach or those already familiar with the Firebase model and reporting.
What are the main concerns regarding GA4 according to the experts?
The main concerns regarding GA4 mentioned by the experts include the UI and reporting limitations compared to Universal Analytics, documentation gaps, changes in naming and concepts of dimensions and metrics, and the need for feature parity with Universal Analytics.
Summary
To sum up, the sunset of Universal Analytics has created some significant challenges for us. New changes await us in the near future, and we should be ready to face them with solid knowledge and newly gained skills.
We hope this knowledge-share post has shed some light on the current situation and inspired you to look for the optimal GA4 solution.
If you’ve decided to upgrade to Google Analytics 4, check out our handy guide and learn how!
Check out our comparison of Google Search Console vs Google Analytics 4 and find out where their strengths lie!
Do Household Cleaners Make Kids Obese? Here’s Why It’s Too Soon To Tell.
More and more research these days suggests we’ve gone too far with our drive to eliminate germs, and that kids could use a little dirt and grime to improve their own health. A new paper published in the Canadian Medical Association Journal on Monday doubles down on that idea, and suggests that infant exposure to common household cleaners could give rise to obesity later in life. And it all comes back to one of modern medicine’s new favorite obsessions: the gut microbiome.
“You have this set of microbes in your gut that you’re dependent on, for a large number of things you take for granted,” says study coauthor James Scott, a researcher who studies environmental health at the University of Toronto. “It’s interesting to see that they’re subject to influences in your environment in ways you’ve never thought.”
The jury is still out on exactly how gut bacteria are actually capable of making us fat or keeping us thin. But the evidence is already here that environmental factors can affect our weight gain, and if gut bacteria do indeed play a role in how we burn fat and keep weight off, then it’s worth investigating what sorts of external factors mediate this relationship.
“The gut microbial community that establishes inside of us is very important in regulating how we use the foods that we eat,” says Scott. “When we eat stuff, we think that we’re feeding ourselves, but mostly we’re feeding microbes. And what the microbes do with it can cause us to use energy differently,” affecting how we store fat and how robustly our metabolism runs.
This is exacerbated by the fact that baby microbiomes are much more vulnerable to changes. “When a baby is born, there really isn’t much of a bacterial community in its gut,” says Scott. It can take up to a year for those bacteria to settle down and make a home for themselves in our intestines, and this process is affected by everything from illness and treatment with antibiotics to feeding with formula or breastmilk.
“Once you have an adult microbiome, it’s a fairly durable, permanent thing,” says Scott. Relatively speaking, anyway—diet and environmental changes can cause shifts even in grown-ups. “But it’s very fragile over that first year of life, especially in the first 100 days.” Scott and his colleagues were interested in gauging whether household cleaners, often antagonistic to bacteria, could have a tangible impact on the gut flora living inside of us during this delicate period.
The latest findings are actually part of a much more extensive research project focused around the Canadian Healthy Infant Longitudinal Development (CHILD) birth cohort: a database of health, medical, and behavioral data taken from over 3,500 Canadian children starting when they were fetuses, originally to determine the environmental factors that lead to asthma and allergies in kids. Over the years, the study—now 10 years old and comprising more than 40 researchers—has grown tremendously in scope, creating opportunities to investigate childhood obesity.
For this particular study, Scott and his team collected and analyzed stool samples from 757 CHILD cohort infants when they were 3 to 4 months old to profile each baby’s gut microbiota. The team also took an inventory of each infant’s household for cleaning products to gauge how much those children were exposed to disinfectants and multipurpose chemicals, and also measured body mass index (BMI) at age 1 and age 3 as a measure of obesity.
Taken together, the team found that children living in homes where antimicrobial products were used at least weekly during infancy had higher BMIs at age 3 than children living in homes with less frequent disinfectant use and where cleaning products were billed as “eco-friendly” (i.e. non-antibacterial). These infants with higher BMIs possessed microbiota profiles that seemed to correspond with the findings, including lower levels of common gut bacteria like Haemophilus and Clostridium.
The researchers were particularly struck by the fact that children exposed to more disinfectants were twice as likely to have higher abundances of Lachnospiraceae, bacteria associated with higher rates of body fat and insulin resistance in animal studies. “We know it’s an important player in terms of what the mature microbiome will look like,” says Scott. Although there are hundreds and hundreds of species of bacteria living in our guts, Scott says only a small variety wield the greatest influence over what the microbiota eventually looks like.
But don’t get hasty and start throwing out your sprays and wet wipes. There are plenty of reasons to eye the findings with a bit of caution. “One of the drawbacks of our study,” says Scott, “is that we didn’t actually make physical measurements of chemical residues, either in the home environment by analyzing things like dust samples, or within the biology by looking at blood or urine samples.” The study hinges on an assumption that cleaning products in the home were used regularly, but without determining how extensively these products were used. “There may be a sub-population of individuals that we’re misclassifying, who have the products but aren’t using them.” And no distinction was made between specific ingredients, other than whether they were antibacterial or not.
Moreover, the data doesn’t factor in other important influencers of gut bacteria profiles, like diet, antibiotic use, the overweight status of the mother before pregnancy, and more.
Scott himself admits the results “are by no means demonstrative of a relationship” between cleaning products and obesity. There’s simply not enough that we understand about the biological mechanisms that encourage or attenuate gut flora like Lachnospiraceae. “The effect is a fairly weak effect, as far as we can tell. The relationship is significant, but I wouldn’t say they are strongly significant. And that significance doesn’t tell us anything about the nature of the relationship.”
Instead, Scott believes “the results really give us some tantalizing window into what’s happening, and raises the next, really interesting questions. It’s important for us to go back and tease out the mechanisms.” And he thinks the ongoing investigations with the CHILD data gives the researchers the foundation to move forward in follow-up studies.
Ultimately, until we have more data to work with, you should stick with the same sort of cleaning practices most experts promote: disinfecting things as need be, but refraining from going overboard and trying to turn your home into a haven of sterility. It’s never a bad thing for babies and toddlers to get a little dirty once in a while.
Opinion: With iPhones Offering 4K Video Recording, It’s Crunch Time For Apple’s Storage Tiers
It was estimated last year that Apple takes home a stunning 94% of all profits made by all players across the entire smartphone industry. Its gross margin across all products hovers around the 40% mark. Apple knows how to make money from its products.
One way it does this is to sell its iPhones with a base level of flash storage that is just barely usable, and charge a hefty markup for higher storage tiers. Sure, you can buy a shiny new iPhone 6s for $649, but that gets you a measly 16GB. It’ll cost you another hundred bucks to get a more reasonable 64GB and another $100 again if you want to max out at 128GB.
I’ve touched on this topic before as part of a more general piece about whether Apple was getting a little too greedy, but it seems to me that when the company is supplying iPhones with 4K camcorders built into them, this is the point at which a 16GB tier becomes completely indefensible …
The storage requirements for 4K video vary with the implementation. For the iPhone, Apple gives a figure of 375MB per minute. A 16GB iPhone gives you around 12GB of user-accessible space. That means that even if you didn’t install a single app, store a single song or take a single photo, your shiny new iPhone can shoot just 32 minutes of 4K video.
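The arithmetic behind that figure, using Apple’s 375MB-per-minute rate and the roughly 12GB of usable space:

$$\frac{12 \times 1024\ \text{MB}}{375\ \text{MB/min}} \approx 32.8\ \text{minutes}$$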
Anyone who has ever attempted even the most amateurish of edited videos will tell you that you need to shoot way, way more video than you’ll have in the finished edit. A ratio of 10:1 is probably not untypical. So you could shoot enough material for about a three minute video.
With a more realistic mix of apps, music and photos, well, forget it. Even if all you wanted to do was create a video of your kid’s birthday party, copying it to your Mac the same day to recover the space, you’re going to be struggling.
But even a 64GB phone isn’t going to cut it on vacation. Let’s be generous and say that once you’ve installed some apps, added a few tunes and taken some photos that you’ve got 32GB of it free. That gets you less than an hour and a half of video. The only way that’s going to work is if you take a MacBook with you, faithfully transfer your footage every day or two and then delete the video from your phone. Not quite the carefree vacation spirit, somehow.
There are two ways Apple could provide iPhone users with more realistic amounts of storage: one is happening too slowly, the second is almost – but not quite – unthinkable.
The first approach is to increase the storage tiers to more realistic levels. Apple made a small step in this direction when it moved from 16/32/64GB in the iPhone 5s to 16/64/128GB in the iPhone 6. But we still have that 16GB starting tier, and even 128GB is looking a little tight at a time when a single app can be up to 4GB in size.
I bought the 128GB iPhone 6 and then 6s because I didn’t ever want to think about storage. I like to have a decent amount of music on board because I don’t yet live in the wonderful cloud-based world Apple seems to think we all do. There are plenty of times when I have poor or zero mobile connectivity, whether it’s on an underground metro system, on a plane or just out in the sticks somewhere. Streaming all my music just isn’t realistic.
Right now, I have 66GB free with next to no video (I tend to delete them after transferring to my Mac). If I’d gone for the 64GB model, I’d have space for just a few GB of video. As it is, I could shoot a little under three hours of 4K video. Personally, I’m a photo guy not a video one, but if I were a video guy that would already be a little tight for a vacation.
Realistic tiers in the age of 4K video and 4GB apps would, I think, be 64/128/256GB.
The second approach Apple could take would be to offer a microSD card slot. It wouldn’t need to enable app storage on it; just music and video would be enough. That way, it can continue to sell a 16GB iPhone, and anyone who wants more storage for the heavy lifting can add it themselves.
This is not entirely unthinkable. There’s a third-party accessory that allows you to insert a microSD card, with a companion app providing the necessary access. You can pick one up for $50.
And Apple itself of course includes an SD card slot in some of its MacBooks (both Retina MacBook Pros, and the 13-inch MacBook Air). But that slot is really only intended to provide a convenient method of transferring photos and videos from standalone cameras.
You can use an SD card for additional storage, and there are even some products specifically designed to essentially create a DIY fusion drive, where the combination of MacBook storage and SD card appears as a single drive. The TarDisk starts from $148 for an extra 128GB. But I very much get the impression Apple hopes most people won’t realize that.
But a microSD card in an iPhone is almost unthinkable for four reasons.
First, it’s notable that there’s no SD card slot in the 12-inch MacBook, and that design, I think, represents the future of the MacBook range. Apple’s mantra today is that connectivity should be mostly wireless. The SD card slot, even in MacBooks, is on the way out.
Second, there’s the iOS/UI issue. iOS has only a limited concept of what files are, so to store them externally, you’d need some tweaks at least to accommodate them. This isn’t necessarily a big deal. Apple could, for example, treat it like Apple Music, where files stored on the card are shown greyed-out with a card icon next to them, and you have to insert the card to get access. But Apple has steadfastly resisted the idea of a visible filesystem in iOS, and that would take us some way in that direction.
Third, there’s the Jony Ive factor. Suggest to him that we break up his sleek designs with an extra port, and I think his response would probably comprise three words, the first being ‘no’ and the third being ‘way.’
Indeed, we’re already headed in the opposite direction, the 3.5mm headphone socket almost certainly disappearing next time around, and I also expect Apple to switch from a physical SIM card to a virtual one as soon as possible. iPhones are going to get fewer ports, not more.
Finally, and most persuasively of all, there’s the financial argument. Techies may realize they can buy a low-storage MacBook and increase the capacity with an SD card, but that would never occur to the average person on the street. If they need more MacBook storage, they pay the Apple tax for it. But add an SD card slot to an iPhone, and it would be big news in the mainstream press. Everyone would see that they could do it. Apple would be losing $100 or $200 of almost free margin on its higher-tier phones. That’s just not going to happen.
There is a third possibility: an iPhone Pro.
We started to see a small differentiation between the iPhone 6 and 6 Plus with the optical image stabilization in the larger model. For the iPhone 7, it’s rumored that the Plus model will get a dual-camera system while the smaller model won’t. I’ve suggested before that Apple may be heading down the route of increasing differentiation between the two models.
I don’t think the iPhone Plus will get a microSD card slot, but I could see a possibility that, at some point, Apple could rebrand it as the iPhone Pro. That would justify a significant difference in specs, and it may then be that this model gets 64/128/256GB storage tiers.
To anyone complaining that there’s not enough storage in the standard iPhone for 4K video, Apple could point to the iPhone Pro and say ‘that’s the model we recommend for those who are serious about video.’
What do you think Apple should do? Are you happy enough with the existing tiers? Do you want more? Should Apple add a microSD card slot? And could the iPhone Pro be the way to go?
This piece was rather a team effort, with input from Benjamin, Greg, Jordan, Seth and Zac.
FTC: We use income earning auto affiliate links.
We Finally Know Why The Betelgeuse Star Dimmed—And It’s Not What You Think
When buzz started to circulate among astronomers that the nearby red supergiant Betelgeuse was fading in November of 2019, astrophysicist Miguel Montargès doubted anything unusual was going on. After all, the star, which forms the beefy shoulder of the constellation Orion the Hunter, does have regular dimming cycles. But as Betelgeuse’s brightness plummeted, he couldn’t ignore the apparent anomaly. So he set out to prove that the standard cycles had aligned by chance, and sought out a team and instrument capable of zooming in on our friendly neighborhood red supergiant.
“My goal was to take an image with [Chile’s Very Large Telescope] to show that the star was as it is normally,” says Montargès, who currently works as an astronomer at the Observatory of Paris. “Of course, I was completely wrong.”
Meanwhile, rumors spread like wildfire. Was the elderly Betelgeuse’s sputtering a sign of an impending and spectacular supernova? Such exploding stars are a rare sight, observed just five times in our galaxy—and never at such close range. Astronomers didn’t think so, but they couldn’t deny that the red supergiant was up to something unusual. Its brightness eventually fell by roughly two-thirds before rebounding—an all-time low since the beginning of modern measurements. Even naked-eyed stargazers spotted the dimming.
Betelgeuse is usually the brightest star in the Orion constellation. But astronomers noticed some serious dimming between 2019 and 2020. Credit: Emily M. Levesque
Two years later, Betelgeuse remains in the night sky, whole and undiminished. So what really went down in that corner of Orion, hundreds of light years away? After conducting observations from one of the world’s most powerful telescopes— and painstaking theoretical work sifting through thousands of theories—Montargès and his team have figured out the most likely explanation. Their conclusions provide the best answer the astronomical community is likely to get for this head—er, shoulder-scratcher.
“This is the most comprehensive picture of what caused the ‘Great Dimming’ that we’ve seen,” said Emily Levesque, an astronomer at the University of Washington, who was not involved in the research. “It gives us a really nice, clear picture of what happened to Betelgeuse.”
A world class zoom
As Betelgeuse faded before astronomers’ eyes, ideas abounded for what the culprit could be. Maybe something was simply obscuring the star. It had no known binary twin, so perhaps an interstellar dust cloud had drifted in front. Or some portion of the expansive dust cloud surrounding the aging star could have thickened up. Perhaps the star itself might have cooled.
To solve the puzzle, the team turned to one of Earth’s highest-powered zoom lenses: an instrument at the Very Large Telescope known as SPHERE (short for Spectro-Polarimetric High-contrast Exoplanet Research).
In most astronomical instruments, Betelgeuse shows up as a single-pixel speck surrounded by a billowing dust cloud. But with SPHERE, the researchers could zoom in far enough to see the star as the round object it really is. This up-close perspective let them observe not only the fact that Betelgeuse was dimming, but also where it was dimming.
Montargès and his colleagues used SPHERE to take a series of images, building on one snapshot he happened to take in January 2019, before the fading began. The group followed up with another during the December dimming, and two more in January and March of 2020. Capturing the complete sequence came as a stroke of luck. The first image was taken on the second of a two-night observation window, after clouds had scuttled their first attempt. “It was really a miracle that we got the [pre-dimming] image in the end,” says Emily Cannon, a coauthor from the Catholic University of Leuven in Belgium.
Perhaps equally miraculous, the team got the final image on VLT’s last day of observations before the pandemic shut it down.
The hard-won snapshots told a clear story: Betelgeuse’s bottom half had dimmed and stayed dark throughout the episode, ruling out a quickly passing interstellar interloper.
14,000 guesses, one answer
In the months following the event, two leading hypotheses emerged, both supported by different observations. The first was that a nearby patch of dust, part of the star’s natural veil, seemed to have obscured its bright surface. The second was that a cool spot seemed to have formed. Montargès and his team considered thousands of models for different ways the dust and cold spot could have played out, about 14,000 in all, searching for the theory that best matched SPHERE’s images. They eventually concluded that the two effects had worked together to darken the star.
Here’s what they think happened. About a year before the Great Dimming, Betelgeuse let out a giant belch of gas, releasing a cloud of hydrogen and other atoms. Then, by chance, a giant swath of the stellar surface cooled down. Convection roils all stars, as hotter material rises and forms quickly cooling bubbles on the surface—a popcorn-like phenomenon astronomers recently observed on our own sun.
[Related: These close-up photos of the sun could help us forecast space weather]
In a puffy red supergiant like Betelgeuse, these cells can cover up to a quarter of the star’s surface, sending a chill out into space. The cold snap gave the gas cloud a chance to settle down and combine into gritty molecules of dust. That sooty cloud then blocked the star’s light from reaching Earth. The group detailed their findings on June 16 in Nature.
A cool spot likely gave atoms the chance to gather into dust molecules, blocking the star. Credit: ESO/L. Calçada
“People had proposed pieces of this, and this Nature paper does an amazing job of putting them all together,” says Levesque, who wrote an accompanying analysis.
A nearby stellar testbed
While Betelgeuse isn’t ready to school modern astronomers in the finer details of supernovae just yet, the Great Dimming incident is helping them understand the final act of stars at least eight times the mass of our sun.
“Betelgeuse gave us this amazing nearby testbed for studying red supergiants that hopefully we can now turn around and apply to these stars as a whole,” Levesque says.
Theorists have known that cool patches and gassy burps are likely to be common, but getting a close look at how these behaviors interact can clarify how stellar winds launch from star surfaces.
Stellar winds are responsible for spreading the heavy atoms that form planets and people throughout the cosmos, as well as for determining whether these heavy stars will have enough heft left over at the end of their lives to collapse into a black hole (Betelgeuse will likely end up as a neutron star, but it’s on the cusp of blackhole-dom).
[Related: Why are big neutron stars like Tootsie Pops?]
Betelgeuse sits much closer to Earth than other red supergiants, so astronomers can see just clearly enough to really figure out what’s happening when it acts up. In fact, the nearby star shines so brightly that its fiery rays can easily damage delicate astronomical equipment. “Your biggest fear with Betelgeuse is that you’ll actually burn the detector,” Cannon says.
The dramatic explosion many hoped for is still coming, but exactly when is anyone’s guess. It could arrive any day in the next 100,000 years or so. Montargès, for his part, feels sentimental about Betelgeuse, the first star he learned to identify by name as a child. He looks forward to studying the red supergiant for the rest of his career.
“When I am a 70-year-old, perhaps I’m fine [with it exploding as a supernova],” he says. “Or 80. When I’m an 80-year-old it’s fine.”