Julie Pacino is leading innovation at the forefront of Web3 filmmaking, and it’s not even close. Her upcoming feature-length debut, I Live Here Now, evolved out of a 2023 photography NFT collection of the same name, and the film is largely being funded by Pacino’s follow-up to that series, Keepers of the Inn. And having recently announced a deal with crypto payments platform MoonPay to co-executive produce the film, it’s become irrefutably clear that few others in the NFT community are maneuvering as deftly as Pacino to push creative freedom forward.
In recognition of Pacino’s inclusion in the 2023 edition of the NFT100, we sat down with the producer, photographer, and filmmaker to talk about her partnership with MoonPay, how she’s using Web3 tech to interact and create with her community, and what fans and collectors can expect this year.
Redefining the creator-fan relationship

Accessibility is a primary value for Pacino. A veteran in the space, she has never lost her sense of appreciation for how figures in the NFT community have welcomed her questions and curiosity with open arms whenever she needed to learn about something.
That accessibility is something Pacino wants to relay to her fans.
Pacino on set. Credit: Chiara Alexa
Thinking of ways to bring her community into the creative decision-making process is nothing new for Pacino. Both I Live Here Now and Keepers of the Inn are collections that feed into the larger narrative of her upcoming film. As Pacino slowly began to form the characters and stories from the images in those series, she developed a plan for her collectors in which they’d be able to log in to a token-gated website and vote on creative decisions for the film.
While she and the community ultimately decided to go a different route with the token-gated website, Pacino notes that individuals in her community regularly reach out to her with interest in helping not only steer the film’s creative course but in realizing its production as well.
“Just last week, I had a two-hour conversation with one of our community members,” Pacino recalled. “We were brainstorming about themes in my movie because I’m in the process of a bit of a rewrite. That was just invaluable to me because this individual is really familiar with my work and understands my photography and the cinematic feel that I want my movie to have. So, it was like having a great sounding board to bounce those ideas off of.”
Shooting for the moon

The story of I Live Here Now revolves around a woman who goes to a mysterious hotel to process childhood traumas. There, she encounters several intriguing characters, and mystery and psychological horror ensue.
One of Web3’s appeals, Pacino says, is its ability to expand storytelling beyond what can be accomplished within the boundaries of a 90-minute film. This is why she plans to have IRL activations — possible events she describes as a blend between participatory theater show Sleep No More and Burning Man — to amplify the experience for collectors and filmgoers. Such plans are only encouraged and made more possible by Pacino’s co-executive producer, MoonPay.
Behind the scenes for I Live Here Now. Credit: Chiara Alexa
“Partnering with MoonPay opens up an entire network, and the infrastructure that they offer is going to be really beneficial for the project when it’s time to build out all of that,” Pacino said. “It was a natural fit.”
Pacino is grateful for the way Web3 and NFTs have allowed her to engage with her artistry on her terms. The feature-length film was partly funded by I Live Here Now’s follow-up collection, Keepers of the Inn, a project that Pacino says changed her life.
Her mission now, she says, is to prove that this model of filmmaking is viable. To do this, Pacino aims to demonstrate that Web3 tech enables creatives to go beyond crowdfunding, using I Live Here Now as an example of what can be accomplished with community and how NFTs can be used to enrich the movie-going experience.
“I found success in Web3 by showing up every day as myself.”
Julie Pacino
The characters in I Live Here Now reflect Pacino’s love of writing themes regarding womanhood and sexuality, topics that she says the traditional film industry isn’t always receptive toward.
“I found success in Web3 by showing up every day as myself and being very open about my sexuality and my art and my interests. […] I love writing women characters that are dimensional and have flaws but that are also badass and oftentimes doing things that you would conventionally see a man doing in a film, whether that’s shooting a gun or being sexually liberated.”
What 2023 has in store

Pacino has a lot on her plate for the year. Apart from finishing and releasing her first feature-length film, she has plans to produce a new NFT series called Hollywood Mornings. The collection, Pacino says, will be an intimate portrait series that could include “some familiar faces.”
And while she has yet to reveal an official release date for the film, fans can look forward to a “moving picture story” from Pacino within the next two weeks that holders of NFTs from the Keepers of the Inn collection can access.
“I don’t like to predict a release date, just because I’m religious about the creative process,” Pacino elaborated. “It will be finished when it’s supposed to be finished. However, within the next eight to 12 months, people will be seeing a lot more from the movie than is currently available right now.”
Want more NFT100 honoree interviews? Get the full list of everyone we spoke with here.
Computer Vision Tutorial: A Step-by-Step Introduction to Image Segmentation Techniques
Introduction

What’s the first thing you do when you’re attempting to cross the road? We typically look left and right, take stock of the vehicles on the road, and make our decision. Our brain is able to analyze, in a matter of milliseconds, what kind of vehicle (car, bus, truck, auto, etc.) is coming towards us. Can machines do that?
Now, there are multiple ways of dealing with computer vision challenges. The most popular approach I have come across is based on identifying the objects present in an image, aka, object detection. But what if we want to dive deeper? What if just detecting objects isn’t enough – we want to analyze our image at a much more granular level?
As data scientists, we are always curious to dig deeper into the data. Asking questions like these is why I love working in this field!
In this article, I will introduce you to the concept of image segmentation. It is a powerful computer vision algorithm that builds upon the idea of object detection and takes us to a whole new level of working with image data. This technique opens up so many possibilities – it has blown my mind.
What Is Image Segmentation?

Let’s understand image segmentation using a simple example. Consider the below image:
There’s only one object here – a dog. We can build a straightforward cat-dog classifier model and predict that there’s a dog in the given image. But what if we have both a cat and a dog in a single image?
We can train a multi-label classifier, in that instance. Now, there’s another caveat – we won’t know the location of either animal/object in the image.
That’s where image localization comes into the picture (no pun intended!). It helps us to identify the location of a single object in the given image. In case we have multiple objects present, we then rely on the concept of object detection (OD). We can predict the location along with the class for each object using OD.
Before detecting the objects and even before classifying the image, we need to understand what the image consists of. Enter – Image Segmentation.
How Does Image Segmentation Work?

We can divide or partition the image into various parts called segments. It’s not a great idea to process the entire image at the same time, as there will be regions in the image which do not contain any information. By dividing the image into segments, we can make use of the important segments for processing the image. That, in a nutshell, is how image segmentation works.
An image is a collection or set of different pixels. We group together the pixels that have similar attributes using image segmentation. Take a moment to go through the below visual (it’ll give you a practical idea of image segmentation):
Source: cs231n.stanford.edu
Object detection builds a bounding box corresponding to each class in the image. But it tells us nothing about the shape of the object. We only get the set of bounding box coordinates. We want to get more information – this is too vague for our purposes.
Image segmentation creates a pixel-wise mask for each object in the image. This technique gives us a far more granular understanding of the object(s) in the image.
Why do we need to go this deep? Can’t all image processing tasks be solved using simple bounding box coordinates? Let’s take a real-world example to answer this pertinent question.
What Is Image Segmentation Used For?

The shape of the cancerous cells plays a vital role in determining the severity of the cancer. You might have put the pieces together – object detection will not be very useful here. We will only generate bounding boxes, which will not help us in identifying the shape of the cells.
Image Segmentation techniques make a MASSIVE impact here. They help us approach this problem in a more granular manner and get more meaningful results. A win-win for everyone in the healthcare industry.
Source: Wikipedia
Here, we can clearly see the shapes of all the cancerous cells. There are many other applications where image segmentation is transforming industries:
Traffic Control Systems
Self Driving Cars
Locating objects in satellite images
Different Types of Image Segmentation

We can broadly divide image segmentation techniques into two types. Consider the below images:
Can you identify the difference between these two? Both the images are using image segmentation to identify and locate the people present.
In image 1, every pixel belongs to a particular class (either background or person). Also, all the pixels belonging to a particular class are represented by the same color (background as black and person as pink). This is an example of semantic segmentation.

Image 2 has also assigned a particular class to each pixel of the image. However, different objects of the same class have different colors (Person 1 as red, Person 2 as green, background as black, etc.). This is an example of instance segmentation.

Let me quickly summarize what we’ve learned. If there are 5 people in an image, semantic segmentation will focus on classifying all the people as a single instance. Instance segmentation, on the other hand, will identify each of these people individually.
So far, we have delved into the theoretical concepts of image processing and segmentation. Let’s mix things up a bit – we’ll combine learning concepts with implementing them in Python. I strongly believe that’s the best way to learn and remember any topic.
Region-based Segmentation

One simple way to segment different objects could be to use their pixel values. An important point to note – the pixel values will be different for the objects and the image’s background if there’s a sharp contrast between them.
In this case, we can set a threshold value. The pixel values falling below or above that threshold can be classified accordingly (as an object or the background). This technique is known as Threshold Segmentation.
If we want to divide the image into two regions (object and background), we define a single threshold value. This is known as the global threshold.
If we have multiple objects along with the background, we must define multiple thresholds. These thresholds are collectively known as the local threshold.
Let’s implement what we’ve learned in this section. Download this image and run the below code. It will give you a better understanding of how thresholding works (you can use any image of your choice if you feel like experimenting!).
First, we’ll import the required libraries.
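The embedded code Gists didn’t survive on this page, so throughout this section I’ll sketch what each snippet plausibly looked like. Starting with the imports (assuming NumPy, Matplotlib, SciPy, and scikit-image):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage           # used later for convolutions
from skimage.color import rgb2gray  # converts RGB images to grayscale
```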
Let’s read the downloaded image and plot it:
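A minimal sketch, assuming the downloaded image was saved as 1.jpeg (the filename is an assumption):

```python
# Read the image from disk and display it
image = plt.imread('1.jpeg')
plt.imshow(image)
plt.show()
```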
It is a three-channel image (RGB). We need to convert it into grayscale so that we only have a single channel. Doing this will also help us get a better understanding of how the algorithm works.
Python Code:
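The code block itself was lost in extraction; the conversion likely looked like this:

```python
# Collapse the three RGB channels into a single intensity channel in [0, 1]
gray = rgb2gray(image)
plt.imshow(gray, cmap='gray')
plt.show()
```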
Now, we want to apply a certain threshold to this image. This threshold should separate the image into two parts – the foreground and the background. Before we do that, let’s quickly check the shape of this image:
```python
gray.shape
# (192, 263)
```
The height and width of the image are 192 and 263 pixels, respectively. We will take the mean of the pixel values and use that as a threshold. If a pixel value is greater than our threshold, we can say that it belongs to an object; if it is less than the threshold, it will be treated as the background. Let’s code this:
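A sketch of the thresholding step (vectorized with NumPy; the original may have used an explicit loop):

```python
# Pixels brighter than the mean become foreground (1), the rest background (0)
threshold = gray.mean()
binary = np.where(gray > threshold, 1.0, 0.0)
plt.imshow(binary, cmap='gray')
plt.show()
```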
Nice! The darker region (black) represents the background and the brighter (white) region is the foreground. We can define multiple thresholds as well to detect multiple objects:
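A sketch with multiple thresholds; the cut-off values here are illustrative, not the article’s originals:

```python
# Bucket each pixel into one of four brightness bands
bands = np.digitize(gray, bins=[0.25, 0.5, 0.75])
plt.imshow(bands, cmap='gray')
plt.show()
```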
This approach has some clear advantages:
Calculations are simple
Operation speed is fast
When the object and background have high contrast, this method performs really well
But there are some limitations to this approach. When we don’t have significant grayscale difference, or there is an overlap of the grayscale pixel values, it becomes very difficult to get accurate segments.
Edge Detection Segmentation

What divides two objects in an image? There is always an edge between two adjacent regions with different grayscale values (pixel values). The edges can be considered as the discontinuous local features of an image.
We can make use of this discontinuity to detect edges and hence define a boundary of the object. This helps us in detecting the shapes of multiple objects present in a given image. Now the question is how can we detect these edges? This is where we can make use of filters and convolutions. Refer to this article if you need to learn about these concepts.
The below visual will help you understand how a filter convolves over an image:
Here’s the step-by-step process of how this works:
Take the weight matrix
Put it on top of the image
Perform element-wise multiplication and get the output
Move the weight matrix as per the stride chosen
Convolve until all the pixels of the input are used
One such weight matrix is the Sobel operator. It is typically used to detect edges. The Sobel operator has two weight matrices – one for detecting horizontal edges and the other for detecting vertical edges. Let me show you how these operators look, and we will then implement them in Python.

Sobel filter (horizontal) =

 1   2   1
 0   0   0
-1  -2  -1

Sobel filter (vertical) =

-1   0   1
-2   0   2
-1   0   1
Edge detection works by convolving these filters over the given image. Let’s visualize them on an example image.
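Assuming the example image was saved as index.png (the filename is an assumption):

```python
image = plt.imread('index.png')
plt.imshow(image)
plt.show()
```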
It should be fairly simple for us to understand how the edges are detected in this image. Let’s convert it into grayscale and define the Sobel filters (both horizontal and vertical) that will be convolved over this image:
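A sketch of the grayscale conversion and the two Sobel kernels shown above:

```python
# Drop a possible alpha channel before converting (PNGs can have 4 channels)
gray = rgb2gray(image[..., :3])

# The Sobel kernels from the matrices above
sobel_horizontal = np.array([[ 1,  2,  1],
                             [ 0,  0,  0],
                             [-1, -2, -1]])
sobel_vertical = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]])
```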
Now, convolve these filters over the image using the convolve function of SciPy’s ndimage package.
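Convolving each kernel over the grayscale image:

```python
out_h = ndimage.convolve(gray, sobel_horizontal, mode='reflect')
out_v = ndimage.convolve(gray, sobel_vertical, mode='reflect')
```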
Let’s plot these results:
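Plotting both edge maps side by side:

```python
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(out_h, cmap='gray')
ax1.set_title('Horizontal edges')
ax2.imshow(out_v, cmap='gray')
ax2.set_title('Vertical edges')
plt.show()
```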
Here, we are able to identify the horizontal as well as the vertical edges. There is one more type of filter that can detect both horizontal and vertical edges at the same time. This is called the Laplace operator:

 1   1   1
 1  -8   1
 1   1   1
Let’s define this filter in Python and convolve it on the same image:
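Defining the Laplace kernel shown above:

```python
kernel_laplace = np.array([[1,  1, 1],
                           [1, -8, 1],
                           [1,  1, 1]])
```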
Next, convolve the filter and print the output:
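Convolving and plotting the result:

```python
out_l = ndimage.convolve(gray, kernel_laplace, mode='reflect')
plt.imshow(out_l, cmap='gray')
plt.show()
```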
Here, we can see that our method has detected both horizontal as well as vertical edges. I encourage you to try it on different images and share your results with me. Remember, the best way to learn is by practicing!
Clustering-based Image Segmentation

This idea might have come to you while reading about image segmentation. Can’t we use clustering techniques to divide images into segments? We certainly can!
In this section, we’ll build an intuition of what clustering is (it’s always good to revise certain concepts!) and how we can use it to segment images.
Clustering is the task of dividing the population (data points) into a number of groups, such that data points in the same groups are more similar to other data points in that same group than those in other groups. These groups are known as clusters.
K-means Clustering
One of the most commonly used clustering algorithms is k-means. Here, the k represents the number of clusters (not to be confused with k-nearest neighbor). Let’s understand how k-means works:
First, randomly select k initial clusters
Randomly assign each data point to any one of the k clusters
Calculate the centers of these clusters
Calculate the distance of all the points from the center of each cluster
Depending on this distance, the points are reassigned to the nearest cluster
Calculate the center of the newly formed clusters
Finally, repeat steps (4), (5) and (6) until either the center of the clusters does not change or we reach the set number of iterations
Let’s put our learning to the test and check how well k-means segments the objects in an image. We will be using this image, so download it, read it, and check its dimensions:
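A sketch, reusing the 1.jpeg from earlier; dividing by 255 scales the pixel values to [0, 1]:

```python
pic = plt.imread('1.jpeg') / 255.0
print(pic.shape)
plt.imshow(pic)
plt.show()
```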
It’s a 3-dimensional image of shape (192, 263, 3). For clustering the image using k-means, we first need to convert it into a 2-dimensional array whose shape will be (length*width, channels). In our example, this will be (192*263, 3).
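Flattening the image into a (pixels, channels) array:

```python
pic_n = pic.reshape(pic.shape[0] * pic.shape[1], pic.shape[2])
print(pic_n.shape)
```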
(50496, 3)
We can see that the image has been converted to a 2-dimensional array. Next, fit the k-means algorithm on this reshaped array and obtain the clusters. The cluster_centers_ attribute of the fitted k-means object will return the cluster centers, and the labels_ attribute will give us the label for each pixel (it tells us which pixel of the image belongs to which cluster).
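Fitting k-means with scikit-learn (assumed here, since it is the usual choice) and recoloring each pixel with its cluster center:

```python
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=5, random_state=0).fit(pic_n)
# Replace every pixel with the center of the cluster it was assigned to
pic2show = kmeans.cluster_centers_[kmeans.labels_]
```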
I have chosen 5 clusters for this article but you can play around with this number and check the results. Now, let’s bring back the clusters to their original shape, i.e. 3-dimensional image, and plot the results.
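Restoring the original image shape and plotting:

```python
cluster_pic = pic2show.reshape(pic.shape[0], pic.shape[1], pic.shape[2])
plt.imshow(cluster_pic)
plt.show()
```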
Amazing, isn’t it? We are able to segment the image pretty well using just 5 clusters. I’m sure you’ll be able to improve the segmentation by increasing the number of clusters.
k-means works really well when we have a small dataset. It can segment the objects in the image and give impressive results. But the algorithm hits a roadblock when applied to a large dataset (a larger number of images).

It looks at all the samples at every iteration, so the time taken is too high, which also makes it expensive to implement. And since k-means is a distance-based algorithm, it is only suitable for convex data and struggles to cluster non-convex shapes.
Finally, let’s look at a simple, flexible and general approach for image segmentation.
Mask R-CNN

Data scientists and researchers at Facebook AI Research (FAIR) pioneered a deep learning architecture, called Mask R-CNN, that can create a pixel-wise mask for each object in an image. This is a really cool concept, so follow along closely!
Mask R-CNN is an extension of the popular Faster R-CNN object detection architecture. Mask R-CNN adds a branch to the already existing Faster R-CNN outputs. The Faster R-CNN method generates two things for each object in the image:
Its class
The bounding box coordinates
Mask R-CNN adds a third branch to this which outputs the object mask as well. Take a look at the below image to get an intuition of how Mask R-CNN works on the inside:
Source: arxiv.org
We take an image as input and pass it to the ConvNet, which returns the feature map for that image
A region proposal network (RPN) is applied to these feature maps. This returns the object proposals along with their objectness score
An RoI pooling layer is applied to these proposals to bring them all down to the same size
Finally, the proposals are passed to a fully connected layer to classify them and output the bounding boxes for objects, along with the mask for each proposal (see the sketch after this list)
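The implementation is saved for the next article, but as a rough illustration of these steps end to end, here is a minimal sketch using a pretrained Mask R-CNN from torchvision – the weights, score threshold, and filename are my assumptions, not part of the original tutorial:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a Mask R-CNN pretrained on COCO and switch it to inference mode
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

img = to_tensor(Image.open('sample.jpg').convert('RGB'))  # hypothetical input file

with torch.no_grad():
    prediction = model([img])[0]

# Each detection carries a class label, box coordinates, a confidence score,
# and a soft pixel-wise mask – the three outputs described above
keep = prediction['scores'] > 0.5       # arbitrary confidence cut-off
boxes = prediction['boxes'][keep]
labels = prediction['labels'][keep]
masks = prediction['masks'][keep]       # shape (N, 1, H, W)
print(labels.tolist(), boxes.shape, masks.shape)
```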
Mask R-CNN is the current state-of-the-art for image segmentation and runs at 5 fps.
Summary of Image Segmentation Techniques

I have summarized the different image segmentation algorithms in the below table. I suggest keeping it handy the next time you’re working on an image segmentation challenge or problem!
Region-Based Segmentation
Description: Separates the objects into different regions based on some threshold value(s).
Advantages: Simple calculations; fast operation speed; performs really well when the object and background have high contrast.
Limitations: When there is no significant grayscale difference, or there is an overlap of the grayscale pixel values, it becomes very difficult to get accurate segments.

Edge Detection Segmentation
Description: Makes use of the discontinuous local features of an image to detect edges and hence define a boundary of the object.
Advantages: Good for images having better contrast between objects.
Limitations: Not suitable when the image has too many edges or when there is low contrast between objects.

Segmentation Based on Clustering
Description: Divides the pixels of the image into homogeneous clusters.
Advantages: Works really well on small datasets and generates excellent clusters.
Limitations: Computation time is large and expensive; k-means is a distance-based algorithm and is not suitable for clustering non-convex data.

Mask R-CNN
Description: Gives three outputs for each object in the image: its class, bounding box coordinates, and object mask.
Advantages: Simple, flexible, and general approach; it is also the current state-of-the-art for image segmentation.
Limitations: High training time.
Conclusion

This article is just the beginning of our journey to learn all about image segmentation. In the next article of this series, we will deep dive into the implementation of Mask R-CNN. So stay tuned!
I have found image segmentation quite a useful function in my deep learning career. The level of granularity I get from these techniques is astounding. It always amazes me how much detail we are able to extract with a few lines of code. I’ve mentioned a couple of useful resources below to help you out in your computer vision journey:
Frequently Asked Questions

Q1. What are the different types of image segmentation?
A. There are mainly 4 types of image segmentation: region-based segmentation, edge detection segmentation, clustering-based segmentation, and mask R-CNN.
Q2. What is the best image segmentation method?
A. Clustering-based segmentation techniques such as k-means clustering are the most commonly used method for image segmentation.
Q3. What is image segmentation?
A. Image segmentation is the process of partitioning an image into classes, subsets, or regions based on certain specific features or characteristics.
What Are The Challenges Women Face In Web3 Industry?
Women face challenges in the Web3 industry, including a lack of representation and bro culture
Web3 has created new opportunities for individuals to participate in a more open, transparent internet while also creating wealth. Women, however, face unique challenges in the Web3 industry, despite the potential benefits of this emerging technology. Women face significant obstacles that can make it difficult for them to thrive in the decentralized web, ranging from a lack of diversity in the industry to gender bias in funding. Continue reading this article to learn about the challenges women face in the Web3 industry.
Cointelegraph interviewed several women in the Web3 industry to better understand these challenges. Devon Martens, the principal blockchain engineer at Sweet NFTs, observed that the cryptocurrency industry, like many other technology and financial sectors, is dominated by men.
Martens stated that when she examines new Web3 companies and their management, she rarely sees women in the C-suite, noting: “It is hard to pursue something as a concept and feels a little more realistic when you see people in those roles already. That is why it is so important to talk about what we can do to cultivate talent across the board, including encouraging women to get into the space.”
Similarly, Sandy Carter, chief operating officer and head of business development at Unstoppable Domains, stated that women make up only 12.7% of the Web3 workforce, emphasizing the need for greater industry diversity. According to her observations, there is a significant gender gap among job applicants. Carter revealed: “When I announced I was joining Unstoppable Domains, I included a link to apply for another role at the company. Out of over 1,500 applications for that job, only 3% of the total applicants were women, and this stuck with me.”
Briana Marbury, CEO of the Interledger Foundation, addressed the issue of gender stereotypes in the crypto industry, noting that the industry is frequently perceived as being dominated by men and characterized by a strong “bro culture” that is unwelcoming to anyone who does not fit into the “pale and male” demographic. Unfortunately, this stereotype may discourage women from entering space. Marbury went on to say:
“People, women especially, often self-deselect themselves from pursuing potentially lucrative, rewarding and purposeful career pathways in crypto — or technology more broadly — because they believe ‘it’s not for people like them.’ Intentionality is key here. There needs to be a lot of intention in the crypto space in shifting old tropes into new and inclusive narratives.”
Diversity is critical in technology development, according to Daniela Barbosa, executive director of the Hyperledger Foundation. “Study after study shows that diversity in technology creation produces more robust technologies and some better outcomes — that diverse communities are simply stronger communities,” she said. She did, however, acknowledge that exclusionary behaviors can have an impact on community cultures and that this is a challenge in the crypto industry.
Barbosa emphasized that the crypto industry places a strong emphasis on developers and finance/traders, two communities where women are still underrepresented. “I still see a lot of toxic behavior in crypto, which includes aggressive language and insinuation towards specific groups or individuals,” she said. This toxic behavior can discourage women from entering the industry, creating a double-whammy challenge for gender inclusivity in the blockchain and cryptocurrency space.
Color Picker Application Using Computer Vision
One question arises here: anyone familiar with OpenCV knows that we would ideally use the cv2.imshow function to display an image. But to make one thing clear, I’m personally a fan of Jupyter notebooks, and in a notebook cv2.imshow won’t work – it will simply crash the kernel. If you search for this issue, you’ll find that calling cv2.imshow in a client-side server such as a Jupyter notebook is pointless, hence we use matplotlib (plt.imshow) to plot the result in the form of an image.
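The article’s setup code (imports and loading the test image) is missing from this extract; a plausible version, with the filename as an assumption, would be:

```python
import cv2
import pandas as pd
import matplotlib.pyplot as plt

# Load the image we will pick colors from (filename is hypothetical)
test = cv2.imread('sample.jpg')

# OpenCV stores channels as BGR, so reorder before plotting with matplotlib
plt.imshow(cv2.cvtColor(test, cv2.COLOR_BGR2RGB))
plt.show()
```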
Let’s declare some global variables which will be accessible along with the whole code.
```python
flag_variable = False
b = g = r = x_position = y_position = 0
```

Code-breakdown:
First, flag_variable tracks whether a color has just been picked with a double-click. Then we have the blue, green, and red channel values along with the X and Y coordinates (named b, g, r, x_position, and y_position so they match the functions below), which for now are set to 0; as soon as we move around the image and pick colors from it, these values will change.
Now we will read the color CSV file and give the heading name to every column.
```python
heading = ["Color", "Name of color", "Hexadecimal code",
           "Red channel", "Green channel", "Blue channel"]
color_csv = pd.read_csv('colors.csv', names=heading, header=None)
```

Code-breakdown:
We set the heading names for the columns of the color CSV file.
Then we read the colors.csv file with the help of the read_csv function.
Note: this colors.csv file holds the name, hexadecimal code, and RGB values of each color; the values we pick from the image will be compared against this file.
Now, we will create the function to get the name of the color (get_color_name).
```python
def get_color_name(Red, Green, Blue):
    minimum = 10000
    for i in range(len(color_csv)):
        # Manhattan distance between the picked color and this CSV entry
        distance = (abs(Red - int(color_csv.loc[i, "Red channel"]))
                    + abs(Green - int(color_csv.loc[i, "Green channel"]))
                    + abs(Blue - int(color_csv.loc[i, "Blue channel"])))
        if distance <= minimum:
            minimum = distance
            color_name = color_csv.loc[i, "Name of color"]
    return color_name
```

Code-breakdown:
So, here we first initialize minimum to a large value (10,000); it tracks the smallest distance found so far between the color codes in the CSV file and the one we picked from the image.
Then we calculate the distance between the picked color code and each row of the CSV file.
If the calculated distance is less than or equal to the current minimum, we update the minimum.
At last, we store the name of the closest color from the CSV file and return it.
Now we will create the function that captures the coordinates (draw_function):

```python
def draw_function(event, x_coordinate, y_coordinate, flags, parameters):
    # Fires on a double left-click anywhere on the image
    if event == cv2.EVENT_LBUTTONDBLCLK:
        global b, g, r, x_position, y_position, flag_variable
        flag_variable = True
        x_position = x_coordinate
        y_position = y_coordinate
        b, g, r = test[y_coordinate, x_coordinate]
        b = int(b)
        g = int(g)
        r = int(r)
```

Code-breakdown:
The function fires only on a double left-click (cv2.EVENT_LBUTTONDBLCLK). Then comes the main part of the function, in which we store the clicked coordinates and their corresponding channel values in the global variables (note that OpenCV indexes images as [y, x] and returns channels in BGR order).
At the last, we will just convert the values to integer type using int().
```python
cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_function)

while True:
    cv2.imshow("image", test)
    if flag_variable:
        # Filled rectangle (thickness -1) acts as a swatch of the picked color
        cv2.rectangle(test, (20, 20), (750, 60), (b, g, r), -1)
        text = get_color_name(r, g, b) + ' R=' + str(r) + ' G=' + str(g) + ' B=' + str(b)
        cv2.putText(test, text, (50, 50), 2, 0.8, (255, 255, 255), 2, cv2.LINE_AA)
        # For very light colors, redraw the text in black so it stays readable
        # (this condition was lost in extraction; the threshold is an assumption)
        if r + g + b >= 600:
            cv2.putText(test, text, (50, 50), 2, 0.8, (0, 0, 0), 2, cv2.LINE_AA)
        flag_variable = False
    if cv2.waitKey(20) & 0xFF == 27:   # Escape key quits
        break

cv2.destroyAllWindows()
```

Output:
Source: Author
Code-breakdown:
In the main logic, we first create the rectangle (thickness -1 fills it) on which we will draw our text.
Next, we build the text string, which contains the color name and its RGB code.
Then, with the help of the putText method, we show the text just above the rectangle we have previously drawn.
We also have a validation: if the color is light (R + G + B >= 600), we display the text string in black so it stays readable.
At last, we have the option to quit the application with the Escape key (key code 27).
Summary
First, we imported all the libraries.
Then we loaded and plotted the selected image.
Then we gave headings to our color CSV file.
Finally, we ran the app loop to execute all the steps.
Thus, by implementing the above steps, we can develop a color picker application using computer vision.
Endnotes

Read on the AV Blog about various predictions using Machine Learning.
About Me

Along with my full-time work, I have an immense interest in the same field, i.e. Data Science, along with its other subsets of Artificial Intelligence, such as Computer Vision, Machine Learning, and Deep Learning. Feel free to collaborate with me on any project in the domains mentioned above (LinkedIn).
Hope you liked my article on building a color picker application. You can access my other articles, published on Analytics Vidhya as part of the Blogathon, via the link.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
B2B Technology Marketing and Media – Market Spotlight: China
By Jon Panker
An interview with Shirley Xie – Country Manager, TechTarget China

In this 2023 Market Spotlight series, we’ll be presenting in-depth interviews with regional marketers on the current state of B2B technology marketing in countries around the world. The series is meant to provide background on unfamiliar regional markets that you, as a marketer, might be buying into or working within.
Can you give us a picture of the current market landscape? What is going on today in the market?

You might have already heard a lot about China’s economic slowdown. It’s true that there are many economic uncertainties and challenges. The stock market is down; investments and growth have slowed. However, let’s put things in perspective. China’s GDP growth target this year will still be 6%. A lot of countries would love to be anywhere close to that type of growth. Much of the local growth is spurred by infrastructure investment. Worldwide IT spending is forecast to decline in 2023, according to Gartner and IDC. However, our recent survey on 2023 IT priorities in China shows that 58% of respondents will increase tech spending compared to the previous year. Big data and cloud computing, which have triggered changes in IT infrastructure and operations, will lead the charge.
And how has Chinese publishing changed over the last 5 years?

Radically! Digitization is at the heart of the change. Traditional print media is still hanging on – but barely, and with dramatic revenue declines. The rise of the web and the proliferation of mobile devices have ushered in a new era of more niche, in-depth content. The old notion of a portal homepage overwhelmed with ludicrous amounts of Chinese characters and graphics, all on one seemingly never-ending page, has become far less common. Publishers have moved towards higher-quality content and user-friendly navigation.
The unique landscape of Chinese social media (the market has no Facebook, Twitter, Instagram, or Snapchat thanks to the Great Firewall) has also caused great changes in Chinese publishing. WeChat, an instant messaging app owned by Tencent that launched in 2011, is the dominant social media player in China. It allows companies and individuals to sign up for an official account to publish content and engage with users directly. According to the official WeChat Impact Report published by Tencent in January 2023, monthly active users of WeChat reached 468 million. Nearly every company has a WeChat official account to communicate with its target audience. In fact, some influential individuals have gained as many as 200K followers within a year and generated more revenue than a mid-size publishing company. Clearly, WeChat has shaken up the traditional publishing industry. However, it’s important to note that for serious B2B IT research, unbiased information websites remain critical. It’s tough to consume long-form, pre-purchase content on a mobile device.
Who are the current B2B IT media companies in the market? How has this changed?

There aren’t too many global players still investing in the China enterprise IT market. IDG used to be the leader in IT publishing. It brought the first foreign IT publication to China. But IDG’s China team has shifted its core business to the venture capital market. CBSi sold its subsidiaries in China last year. ZDnet China is now owned by local publishers and is very news-oriented with a B2B/B2C mix. The same could be said for another local player, Chinabyte. Others like CSDN and 51CTO have big audiences that mainly consist of developers and programmers. TechTarget sits in a unique position within the market. Our content is 100% focused on solving IT problems and supporting IT purchases. Our editors handpick the best global TechTarget pieces to translate and then supplement that content with locally generated insight from the China tech market. Our audiences come to our network with a clear mission of leveraging technology to better support their businesses.
How does Chinese culture affect marketing spend?

“Guanxi” (which translates to “relationship”) still plays an important role here in China. Face-to-face meetings and building out a network of trusted colleagues is the key to getting things done. Offline events remain popular, but increasingly Chinese tech buyers will have done significant pre-purchase research online prior to attending those events. They’re combining independent content consumption with personal connections. That’s why successful China marketers are right sizing their investment in events and ensuring they also have a strong digital footprint.
What are the current B2B IT buying trends and priorities in the market?

Economic pressure has increased the focus on ROI. And IT buyers are now a bit more cost conscious. However, we’ve seen that once an IT project is identified, the buying cycle moves pretty fast, especially for the local private-owned companies as opposed to state-owned enterprises.
Data center consolidation, big data and mobility are the top 3 IT priorities according to our 2023 IT priorities survey:
As China’s digital economy grew rapidly, data centers sprung up. It’s estimated there are 400,000 data centers in the country, but some are falling behind international standards for energy efficiency. The China government has issued a set of guidelines calling for green data centers, which is the big push for data center consolidation initiatives. Organizations are being forced to explore how to control the high costs of energy use and old hardware and how to optimize data center management.
Like other markets, big data is also driving IT spend, especially in the finance and e-commerce industries. Last year, the government announced its national Big Data strategy aimed at improving public data sharing and management. We’re seeing this market move from strategy discussions to actual implementations.
Finally, there’s mobility. This is a country with 1.3 billion mobile users. According to IDC, China’s enterprise mobility service market reached USD $685 million in 2014 and will grow rapidly in the next several years. Our research finds that within the enterprise the deployment of mobile device management software and the roll out of mobile enterprise applications are the main spending priorities for mobility.
Google’s New Web3 Team Will Capitalize On Its Blockchain Dreams
Google Cloud is seeking to add personnel to its blockchain team. Read more in this article.
Google Cloud is creating a team tasked with developing services for enterprise clients seeking to leverage blockchain technology. In an email, Google Cloud VP Amit Zavery said that the company’s cloud platform aims to become the first choice for developers working in Web3. He called Web3 a “market that is already demonstrating tremendous potential” and said that customers are requesting greater support for Web3 and cryptocurrency.

Zavery clarified in a statement to CNBC that the division is “not trying to be part of [the] cryptocurrency wave directly.” Instead, it is providing companies with access to blockchain technology. In other words, the division will provide blockchain-as-a-service to enterprise users, giving those users the ability to navigate blockchain data or run blockchain nodes. The services will be similar to those offered by big tech companies such as Alibaba, Amazon, and, formerly, Microsoft—the latter of which ended its Azure blockchain services last year.

Reports from CNBC also indicate that former Citigroup executive James Tromans, who joined Google in 2023, will lead the blockchain team and report to Zavery. Web3 is the decentralized version of the internet where cryptocurrencies are the main source of transactions. The creation of Web 3.0 poses a challenge to the current model of the internet wholly controlled by giants like Amazon, Google, and Meta Platforms.
Backend services

Google Cloud aims to provide backend services to the developers who are working to build the ‘next generation of the internet.’ It seems that the firm has plans to go knee-deep in the world of digital assets, as Cryptonary witnessed the partnership of the tech giant with Bakkt aimed to launch digital assets to consumers.