Over a distinguished career spanning more than four decades, Robert Pinsky has written hundreds of poems and amassed numerous honors.
Pinsky, a three-time U.S. poet laureate and a College of Arts & Sciences professor of English, has won a National Endowment for the Humanities Fellowship as well as two of poetry’s most distinguished honors: the William Carlos Williams Award from the Poetry Society of America and the Lenore Marshall Poetry Prize. He has also been nominated for the Pulitzer Prize for Poetry.
Tonight, Pinsky will read from his newest book, Selected Poems (Farrar, Straus and Giroux, April 2011), at the Photonics Center at 7:30 p.m. His eighth volume of poetry, Selected Poems, contains no new poems—rather, it’s a kind of summing up of his career, an opportunity to distill a lifetime’s work into one volume. That got us wondering how, in a book like this, a poet as prolific as Pinsky goes about selecting what to include and what to leave out. And so we asked him.
BU Today: How did you choose what to include in Selected Poems? Is there anything that thematically connects these poems?

One way or another, everything I’ve ever written has to do with the world of made and made-up things I was born into: religions, movies, people, customs, laws, foods, jokes, tragedies, wars, stupidities, technologies, a town, a family, truths, lies, words, gadgets, cheeses…and so forth. To put it differently, no matter where I start, I seem to end up thinking about culture, especially American culture, and my place in it.
Was there anything that surprised you in sifting through a lifetime’s work for this new book?

The consistency surprised me, as it often does. I could add a sort of circular, rather than linear, sense of time. Another surprise was detecting, often in a submerged way, the Hebrew school of my childhood—not just beliefs or a worldview, but the feel of it, and formal matters like learning by rote a language with characters mainly for consonants, with vowels as subscripts. The chanting.
You chose not to include any new poems. Why?

My friend, editor, and publisher Jonathan Galassi had a strong opinion that a “New and Selected” is an awkward combination of things. He argued for the elegance of publishing this book, and then, in a couple of years, a new book. As soon as Jon said that, I was convinced, and his opinion seemed echoed by something in me that wanted a pure summing-up at this point in my life.
In Selected Poems, the acts of both remembering and forgetting are frequent leitmotifs. What intrigues you about the subject?

Forgetting becomes an increasingly important subject as you get older, not because of memory failure, so much, as because you have seen many things rise and then fade, in the world at large and in your own experience. Some are dear, some are trivial. I feel increasingly pious about my cultural ancestors and my literal ancestors: great-grandparents and their parents, important teachers and their teachers—all the people, mostly anonymous—who made this world and me. To think about them, and to think about the limits of my own life, is to think about forgetting and, yes, about remembering, too.
Is it true that being a jazz musician during high school inspired you to become a poet?

Music was the only thing I did all right at in high school. It was the one competence that got me through rather difficult years. The attempt to be an artist (a pretty feeble attempt) kept my mind alive. Maybe it kept me alive!
What inspires you to sit down and write a poem?

Great works of art inspire me: it’s not unlike what makes a three-year-old at a wedding begin to dance when the band starts up and the grown-ups begin to dance. So, Emily Dickinson and Ben Jonson and Walt Whitman and John Keats inspire me. Poems like Keats’ “Ode to a Nightingale” or Dickinson’s “Because I could not stop for Death” or Jonson’s “Let it not your wonder move” give me a certain feeling, and I start to want to try to create that feeling.
You’ve been called America’s “civic poet.” Is that an accurate description?

I don’t know. It does seem that I am more interested in writing about streets and towns and such than some people are. Maybe it comes more naturally to me because I grew up as a small-town provincial. Francis Fergusson’s description of the ancient Greek theater in The Idea of a Theatre—“civic” I guess—was formative, and remains central, for me.
You contend that poems should be spoken, not just read. Why?

The medium is the reader’s voice. At tonight’s reading, I hope to give a hint of what I worked to achieve in the vowels, consonants, intonations, rhythms, trying to make them like gestures that refine and focus the meanings.
What do you hope readers take away from Selected Poems?

Pleasure. A great feeling from hearing the sounds of meaning in the lines fitted to the tunes of the sentences.
Robert Pinsky will read from his new book, Selected Poems, tonight, Thursday, March 24, at 7:30 p.m. at the Photonics Center, Room 206, 8 St. Mary’s St. For more information, call 617-353-2821. The event is free and open to the public.
John O’Rourke can be reached at [email protected].
Why Web 3 Should Be Green And Sustainable?
This article was published as a part of the Data Science Blogathon.
Introduction

The arrival of Web 3 is about unlocking unimaginable value. The next chapter of the internet heralds a new economic paradigm, and emerging technologies will enable new business models that will transform the world. Over the next few decades, harnessing these opportunities will be a defining challenge for all businesses. At the same time, climate change is one of humanity’s most pressing issues, and addressing it is everyone’s responsibility. The ability to provide high-performance computing and infrastructure to support Web 3 development while minimizing environmental impact is critical to its future success.
The High Energy Consumption of Web 3 Technologies

Even for experts in the field, assessing the environmental impact of such a vast ecosystem as a digital communication network is a guessing game. How many identifiable data centers are there? How many stealth centers exist (especially in the military and government sectors)? How many people are working above or below their capacity? How much power do they draw from the grid?
All transactions must be considered when calculating the internet’s power consumption and carbon footprint. Because Web3, like the traditional web, has layers, the only way to assess its long-term viability is by segment.
Web 3’s data traffic layer is already operational. The number of submarine cables, antennas, and data centers can be used to calculate the environmental impact. The impact of manufacturing these components, the impact of installation, and the impact of electricity use are all factors.
In terms of electricity consumption, the total data transmission consumption of the entire Internet amounts to between 260 and 340 TWh (roughly 1.4% of global electricity consumption).
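A quick back-of-the-envelope check on those figures (using the midpoint of the quoted range, which is an assumption on my part):

```python
low_twh, high_twh = 260, 340   # quoted data-transmission consumption range
share = 0.014                  # "roughly 1.4% of global electricity"

midpoint = (low_twh + high_twh) / 2     # 300 TWh
implied_global_twh = midpoint / share   # global total implied by the 1.4% share
print(round(implied_global_twh))        # roughly 21,000 TWh
```

That implied global total of about 21,000–22,000 TWh is broadly consistent with published estimates of worldwide electricity consumption, so the quoted range and percentage are at least internally coherent.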
The end-use layer, which corresponds to the devices (smartphone, notebook, etc.) that make transaction requests, is used for various purposes other than web3. It is even possible to argue that if web3 did not exist, it would have little impact on the production of these devices because devices geared solely or primarily for blockchain applications are still scarce on the market.
A similar conclusion can be drawn about the storage layer, which employs traditional computers and servers. There are few data centers dedicated to web3 data storage because the full node concept consists essentially of one computer per node, and it does not make much sense to build a large facility for this purpose, except in the case of renting virtual machines, where different users hire space in cloud services. However, these exist primarily to serve web2.
The Global Scenario:
Climate change is a worldwide coordination issue. The system has failed to coordinate effective policies and capital investments required to address humanity’s most pressing threat. Accelerated action and ambitious climate policies are required to adapt to climate change and rapidly reduce emissions to avoid greater loss of life, biodiversity, and infrastructure.
Global coordination technologies that can cut through the mass bureaucracy of climate action are desperately needed. This is where Web3 innovation could come in handy.
The Environmental Problem of Web 3

The consensus layer is the real source of web3 criticism. The Proof of Work protocol is responsible for the bitcoin network consuming more energy than some countries.
PoW mining involves computers performing numerous calculations. These calculations are lottery-style attempts to “guess” a correct number. The first person to get it right gets to mine a block. Every 10 minutes, a new block is mined in the network, and the process is restarted. The more computing power a miner has, the more likely he will mine a block.
This is why Bitcoin consumes so much energy today. As the network’s usage and popularity grow, so does the financial value of Bitcoin, which encourages more miners to participate.
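The lottery described above can be sketched in a few lines of Python. This is a toy model, not Bitcoin’s actual block format: the string encoding and the “leading hex zeros” difficulty rule are illustrative simplifications of hashing below a target.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Try nonces until the SHA-256 hash starts with `difficulty` hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16.
winning_nonce = mine("toy block", 4)
print(winning_nonce)
```

More hash attempts per second mean more lottery tickets per ten-minute round, which is exactly why mining power, and therefore energy use, scales with competition.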
Currently, the energy cost of Bitcoin’s PoW is around 200 TWh, which is comparable to Thailand’s total consumption. Bitcoin’s annual carbon footprint is approximately 114 Mt CO2. This energy consumption is expected to rise in the future. However, Bitcoin supporters argue that the use of clean energy is increasing. According to the Bitcoin Mining Council, sustainable energy accounts for nearly 60% of the energy cost of Bitcoin mining today.
Making Web 3.0 Green and Sustainable

There are several ways to work toward a green Web 3 future:
Proof of Work consumes a lot of energy. The Proof of Stake (PoS) protocol operates differently than the Proof of Work (PoW) protocol. Instead of computers attempting to hit a number, PoS selects the miner based on the number of tokens he owns. The more tokens an agent possesses, the more likely he is to be selected. This explanation is quite simplistic and ignores several security and decentralization aspects of PoS protocols, but the basic concept holds.
There is no mining because there is no need to search for random numbers. Because of this, “miners” in a PoS protocol are referred to as “validators.” PoS protocols are considered far more sustainable: the energy cost of running a PoS machine is comparable to that of a laptop.
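In simplified form, stake-weighted selection is just a weighted random draw. A minimal sketch (the validator names and stake values here are purely illustrative, and real protocols add verifiable randomness, slashing, and committee rotation on top):

```python
import random

def pick_validator(stakes: dict, rng: random.Random) -> str:
    """Choose a validator with probability proportional to its stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Illustrative stakes: alice holds 90% of the tokens.
stakes = {"alice": 900, "bob": 100}
rng = random.Random(0)
picks = [pick_validator(stakes, rng) for _ in range(1000)]
print(picks.count("alice") / len(picks))  # close to 0.9
```

Note that no brute-force search happens anywhere in this loop, which is why the energy footprint stays at the level of ordinary computation.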
In the case of PoW, specific computers, known as ASICs, have been developed to do the job. A modern ASIC can draw up to 3,000 watts, and dozens of these machines can be found in a mining farm.
This is yet another critical aspect of the PoS protocol. There is no such thing as a “mining farm” because a pool only needs more delegated tokens, not more computers, to increase its chances of being chosen to validate a block. So, in addition to the energy operation benefit, there is an infrastructure benefit.
Crypto Coral Tribe is a community of 6565 NFTs that uses art and technology to drive marine and wildlife conservation. This includes using the funds raised from their first pledge to plant 3000 corals across three continents.
A community-driven approach is urgently required to design innovative climate solutions that leverage local and indigenous knowledge while iterating on alternative methods of driving impact. Web3 can accomplish this. Several Web3 projects are already in the works to redefine how humans interact with natural resources and the larger environment.
Conclusion

The proliferation of green Web3 initiatives eventually reflects a greater awareness of the critical need for sustainability and may allow us to envision a feasible reconciliation between technology and the environment.
To conclude:
Although web3 represents an evolution over web2 in many ways, it is unlikely that web3 will be more environmentally friendly than web2.
Decentralized networks necessitate a complex infrastructure and the implementation of consensus protocols, which can be energy intensive in some cases.
Now is the time for companies eager to stake their claim in the next frontier of computing to act, both in the metaverse and in driving the underlying software toward a more sustainable future.
Web 3.0 is here to stay and change the world. The responsibility of making it green and good for the environment falls on the people making the change.
Why You Should Be Using Alexa On Your Smartphone
You need the Alexa app for Android or iOS to set up any of the Amazon Echo speakers, but once you’ve got your Echo in place, you might not give much thought to the program on your phone—not when you can do so much with voice commands aimed at your smart speaker.
If you’re ignoring the app, though, you’re missing out. It’s much more than just a setup utility for the Echo speakers, and it just got a major overhaul that means it’s more intuitive to use than ever.
Get to know the home screen

Open up the Alexa app and tap Home to get to the home screen (if you’re not already on it). This displays a useful overview of the help you’re getting from Alexa: you’ll see upcoming reminders, lists that you’ve edited recently, and speaker skills that you might want to try (from making a call to controlling your smart home).
Right at the top is the Alexa button—just tap this (or say “Alexa”) to give the digital assistant a voice command. If you’re not sure of the best way to use Alexa, try tapping Browse skills or Browse things to try further down the home screen to see some of the tricks Alexa is capable of.
View reminders and lists

If you’ve set up a lot of reminders and lengthy lists through Alexa, it’s easier to browse through them on a screen than it is to hear them read out to you. Using the app can also be the better option if you’ve got some complicated reminders to set up or notes to add that Alexa might not understand if you were using your voice.
Recently set reminders and recently used lists appear on the app’s home screen, but you can see them in the app by tapping More, then Lists & Notes, or Reminders & Alarms. You can add new entries, edit entries that Alexa has already saved, tick items off your to-do list, cancel alarms and reminders, and generally manage the information Alexa has committed to memory.
Say you want to set up a recurring reminder to brush your teeth that comes through the bedroom Echo speaker every evening at 10. That’s quite a complex reminder to explain to Alexa using your voice, but it only takes a few taps in the app: More, Reminders & Alarms, then Add Reminder.
Control your smart home

Controlling the smart home devices you’ve connected to Alexa is another task that can be a little easier on a phone screen than it is with your voice, depending on what you’re trying to do (and how many devices you’re trying to manage). You can view your connected smart home devices in the Alexa app by tapping Devices.
The subsequent screen will let you turn smart lights and smart plugs on and off, adjust the temperature of your smart thermostat, check the status of your smart smoke alarm, and get at any other device you’ve set up. To add new devices to your Alexa ecosystem, tap the plus button in the top right corner.
To access more detailed settings for a particular device (the brightness of a smart bulb, for example), tap on the device category and then the device itself. Where devices are grouped together—say all the smart lights on a particular floor—you can turn them on and off with a single tap.
Set up Alexa routines

Routines are one of the most useful features Alexa offers, basically combining several tasks that are triggered by one command. You might say “good morning” to Alexa, for example, to have your heating turn on, your smart lights turn off, and your morning playlist start blasting through an Echo.
To configure routines for Alexa inside the mobile app, tap More, then Routines. You can edit existing routines by tapping on them, or create a new one via the plus button in the top right corner. You’ll need to give each routine a name, a command to trigger it, and a list of actions that happen as a result.
If you’re stuck for inspiration, open up the Featured tab to see some of Amazon’s suggested example routines—like one that will give you the weather forecast as soon as your morning alarm has finished. The more you play around with the options, the more ideas you’ll get for ways to simplify your life.
Manage music and audiobooks

The Play icon in the center of the Alexa app’s navigation bar is helpful if you’re using an Echo speaker to play music or audiobooks. Tap on it to pause and restart playback, skip forward or backward in the current playlist, and adjust the volume without having to shout voice commands at your smart speaker (which can be more convenient, especially if you’re in a different room).
Even better, the Play screen shows you what you’ve been listening to recently, whether that’s your music library, online radio stations, or audiobooks (exactly which options you’ll see will depend on the different services you’ve connected to Alexa). You can dive directly back into something right from the app.
To check what services you’ve got hooked up to Alexa, go to More, then Settings—you’ll see entries for Music, TV & Video, Flash Briefing, Photos, and several others. In some cases, you can set default services that launch in response to voice commands when you don’t specify a particular service (such as Spotify or Apple Music).
There Should Be Billions Of Earths Out There. Why Can’t We Find Them?
Beginning in 2009, the Kepler space telescope kept constant watch over some 200,000 stars in our corner of the Milky Way. It was looking for where life might exist—by pinpointing small, rocky planets in the temperate zones of warm, yellow suns, and figuring out just how special Earth is in the grand scheme of things. While the mission revolutionized the study of exoplanets, those main objectives went largely unfulfilled. A mechanical failure cut short Kepler’s initial survey in 2013. Astronomers would later discover just a single Earthlike planet in its dataset.
A decade later, researchers are finally closing in on some of the answers to the questions Kepler raised. Earthlike planets are probably rare, but not exceedingly so. Roughly one in five yellow stars could have one, according to a new analysis of Kepler’s data published in May in The Astronomical Journal. If the researchers’ conclusions are correct, that would mean the Milky Way might be home to nearly 6 billion Earths. Yet of the 4,000 likely exoplanets we’ve spotted, just one looks anything like our home planet. So where are the rest?
“[Truly Earthlike planets] are not hiding per se, it’s just that the sensitivity of our telescopes is simply not good enough yet [to find them],” says Dirk Schulze-Makuch, an astrobiologist at the Technical University Berlin, Germany, who was not involved with the research.
If astronomers want to find Earth 2.0s, research calculating the frequency of such worlds will give future telescopes their best chances of success.
Michelle Kunimoto, the exoplanet scientist who led the recent analysis, adopted one standard definition of what it takes to be an Earthlike planet: a world between three-quarters and 1.5 times as large across as ours, orbiting a sun-like (“G-type”) star, at between 0.99 and 1.7 times our orbital distance. Only Earth satisfies those criteria in our solar system, with Mars being too small and Venus orbiting too close for inclusion.
Worlds checking all three boxes are almost certainly out there, as Kunimoto’s work, which earned her a PhD from the University of British Columbia, suggests. But they’re hard to spot. The slight dimming caused by a small planet is hard to see. Plus, they might transit in front of their star just once every few hundred days—and astronomers need at least three transits to confidently claim a detection. To make matters worse, yellow suns are rare to begin with, making up just 7 percent of the 400 billion stars in the Milky Way. The vast majority of the galaxy’s stars are dim red dwarfs, which may bathe nearby planets in lethal flares.
Mission planners didn’t know it at launch, but Kepler had almost no chance of completing its initially intended search. To rack up three transits of slower planets orbiting at the outer edge of their suns’ habitable zones, the telescope would have needed to peer unwaveringly at the same patch of sky for more than seven years. But its pointing machinery broke down after four, long enough to find planets only in roughly the inner half of their stars’ temperate zones.
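The seven-year figure follows from Kepler’s third law. A sketch, assuming a one-solar-mass star and taking 1.7 AU as the outer edge of the habitable zone from the study’s criteria:

```python
def orbital_period_years(a_au: float) -> float:
    """Kepler's third law for a 1-solar-mass star: P^2 = a^3 (P in years, a in AU)."""
    return a_au ** 1.5

outer_edge_period = orbital_period_years(1.7)  # about 2.2 years per orbit
baseline = 3 * outer_edge_period               # three transits needed for a detection
print(baseline)                                # about 6.6 years
```

Since the first transit can fall at any phase of the orbit, the actual stare time needed can exceed three full periods, pushing the requirement past seven years—longer than the four years Kepler’s pointing machinery survived.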
What’s more, Kepler was designed with our sun in mind. But our star turns out to be special in more ways than one. “The sun tends to be quite quiet,” Kunimoto says, while Kepler’s stars crackled more from their intrinsic burning. “Essentially, it’s a lot harder to find the Earthlike planets [than mission designers expected].”
Kepler delivered the scientific goods in the form of a huge haul of thousands of exoplanets, mostly massive giants hugging their host stars. But researchers have been trying to infer the less epic, more familiar worlds that Kepler couldn’t quite make out ever since. (Kepler 452b, which is 10% wider than Earth and has a year that’s only three weeks longer than ours, is one prominent Earthlike exception.)
The new work builds on a method developed by Danley Hsu, an astronomer at Penn State. Previously, many researchers assumed there’d be an even spread of planet sizes and orbits, but as the population of known exoplanets has grown, some kinds of worlds have come to seem more common than others. For planets with years shorter than 100 (Earth) days, for instance, many are 50% wider than Earth and many are 150% wider, but few have twice our planet’s girth. To accommodate these unexplained oddities, Hsu and Kunimoto both broke the Kepler data into many different categories of size and orbit and analyzed them all in a more independent way. Kunimoto went a step further and generated her own list of exoplanet candidates, not relying on the official catalogue.
In the end, Kunimoto found that an Earthlike planet may circle roughly one in every five sun-like stars. She stresses that this figure represents an upper limit, however, and that such worlds could well be somewhat rarer. Her results reflect an emerging consensus that the Earth-to-sun ratio in the local Milky Way should hover in the ballpark of 1:10. That figure remains a bit rough, Kunimoto acknowledges, but it’s tighter than the wide ranges published previously, which suggested anywhere from one Earth per fifty suns to two Earths orbiting every single sun.
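The “nearly 6 billion Earths” headline number mentioned earlier follows directly from these fractions:

```python
milky_way_stars = 400e9     # stars in the Milky Way (figure quoted in the article)
g_type_fraction = 0.07      # ~7% are sun-like yellow stars
earths_per_g_star = 1 / 5   # Kunimoto's upper-limit estimate

earthlike_planets = milky_way_stars * g_type_fraction * earths_per_g_star
print(f"{earthlike_planets:.1e}")  # 5.6e+09, i.e. nearly 6 billion
```

Because the one-in-five figure is an upper limit, this 5.6 billion estimate is a ceiling rather than a count.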
Schulze-Makuch calls the estimate “reasonable” and says that this kind of research gives us a valuable glimpse at the answers to otherwise unknowable questions, such as “whether our solar system is typical or kind of a freak system.”
He cautions, however, against letting one’s imagination run wild with images of a galaxy awash in billions of blue and green, cloud-studded orbs. The limited criteria of orbit, size, and star type say little about whether the planets have protective atmospheres and magnetic shielding, water, or the materials needed for life to emerge.
Estimates like Kunimoto’s may also shape future missions and give them more of a chance of finding more Earthlike planets than Kepler had. The more common these planets are, the more mission planners will be able to focus on designing instruments that scrutinize individual worlds, as opposed to wider sweeps.
Schulze-Makuch hopes, for instance, that the Keplers of the future will carry “star shades” that block out stars to capture exoplanets as single pixels, whose variations could betray the passage of seasons or the presence of ice caps. Such innovations could narrow researchers’ definitions of what it means to be an Earthlike planet, but he predicts a clear-cut discovery of a true Earth 2.0—one sculpted by life—remains a long way off.
“If we just use the technology we have right now,” he says, “it feels like we’re light years away.”
Why Your Community’s Next Solar Panel Project Should Be Above A Parking Lot
Solar canopies built above parking lots are an increasingly common sight around the country—you can already see these installed at university campuses, airports, and lots near commercial office buildings. Because the sun is a renewable resource, these solar canopies reduce greenhouse gas (GHG) emissions associated with energy production.
The clean energy benefits are clear: A 32-acre solar carport canopy at Rutgers University in New Jersey, for instance, produces about 8.8 megawatts of power, or about $1.2 million worth of electricity. They also make use of existing space to generate clean energy rather than occupying croplands, arid lands, and grasslands.
There may be other perks to adding solar panels over parking lots, too. Research shows that the benefits of solar canopies can be taken a step further if electric vehicles (EVs) are able to charge right in the parking lot. People can tap into this potential by installing EV chargers in solar carports, which makes charging more accessible for owners and creates a small-scale local energy grid for the community. The expense of installation and other barriers, though, can make deployment challenging.
EV charging in the carport

A solar carport canopy with 286 solar modules is able to produce about 140 megawatt-hours of energy per year for EV charging, according to a new Scientific Reports study. That’s enough to provide electricity to more than 3,000 vehicles per month if each car parks for an hour. The authors say charging EVs this way can generate 94 percent lower total carbon dioxide emissions than electricity from traditional grid methods.
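Those figures imply a modest per-car energy delivery. A quick check (the even monthly split is my assumption, not the study’s):

```python
annual_mwh = 140        # canopy output per year, from the study
cars_per_month = 3000   # vehicles served per month, one hour each

kwh_per_session = annual_mwh * 1000 / (12 * cars_per_month)
print(round(kwh_per_session, 1))  # about 3.9 kWh per one-hour session
```

Roughly 4 kWh per hour is consistent with slower Level 1 or low-end Level 2 charging, good for on the order of a dozen miles of range in a typical EV, which suits top-up charging while parked.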
To maximize these benefits, smart technology that controls the timing and speed of charging is critical, says Lynn Daniels, manager at RMI’s Carbon-Free Transportation program who was not involved in the study. Smart charging allows users to optimize energy consumption by charging only when prices are cheaper due to low-energy demand or when more renewable energy is available on the grid.
EV ownership is growing so swiftly that entire electric grids are at risk of being stressed. If most owners across the US Western region continue to charge their EVs during nighttime, peak electricity demand can increase by up to 25 percent, according to a 2023 Applied Energy study. Accessible daytime charging at work or public charging stations would help address this problem and reduce GHG emissions.
There are ways to maximize emission reductions when smart-charging electric vehicles, according to a recent report from RMI, a nonprofit organization focusing on sustainability. “Our report found that, today, charging one million EVs at the right times is equivalent to taking between 20,000 and 80,000 internal combustion engine vehicles off the road,” says Daniels. If EVs represent 25 percent of vehicles by 2030, “emissions-optimized smart charging,” he adds, would be the equivalent of removing an additional 5.73 million automobiles with combustion engines.
A source of revenue, goodwill, and more

Solar canopies provide vehicles with protection from rain, sleet, hail, and other inclement weather, says Joshua M. Pearce, whose research specializes in solar photovoltaic technology and sustainable development at Western University in Canada. The shade they provide also means car owners may require less cooling from air conditioning at start-up because the vehicle didn’t stay under the sun. But that’s not all they can do.
A solar carport canopy with EV charging can be an opportunity for site owners to earn money if drivers have to pay a fee to charge their cars, says Daniels.
On the other hand, if businesses or large-scale retailers provide EV charging for free, Pearce says, that may develop goodwill with customers. Shoppers might spend more time and money while waiting for their cars to charge, allowing business owners to earn even more profit, he adds. And shopping centers have lots of potentially convertible areas: If Walmart deployed 11.1 gigawatts of solar canopies over its 3,571 Supercenter parking lots in the US, that would provide more than 346,000 solar-powered EV charging stations for the 90 percent of Americans living within 15 miles of a store, according to a 2023 estimate.
Solar canopies also save energy, since about 5 percent of electricity is lost each year as it travels from a power plant to your home or business. If the electricity the solar panels produce is used directly by the buildings they’re connected to or the EVs charging in the parking lots, transmission losses can be reduced, says Pearce.
The widespread deployment of solar canopies across parking lots may be an opportunity to create a small-scale local energy grid as well. The electrical grid is highly vulnerable to natural disasters, intentional physical attacks, and cyberattacks. Solar systems in parking lots can be used as anchors for microgrids—local, autonomous power systems that can remain operational while the main grid is down—that could make communities more resilient, “similar to how the US military uses solar to improve national security,” says Pearce.
Logistics of transforming parking lots

Upfront capital costs are the primary roadblocks to solar-powered carports with EV charging, says Pearce. The physical structure needs to be taller and more robust than a conventional solar farm, requiring more materials like metal and concrete, he adds. EV chargers also cost money, increasing the price even further. Commercial EV charging stations can cost around $2,500 to $40,000 for a single port. An installation often requires permits and approval from local authorities or inspectors, all of which are additional expenses and barriers to faster deployment.
The design of the solar array may be a challenge, too. “There’s a trade-off between right-sizing the solar array for current EV charging needs versus anticipated future demand and the costs of the solar array,” says Daniels. “The solar array design and location on the site can create significant variability in installation complexity and project costs.”
Daniels recommends raising awareness about currently available tax credits and other incentives, such as the federal solar tax credit that can deduct 30 percent of total commercial solar installation costs. There is also a tax credit of 6 percent (with a maximum credit of $100,000 per unit) on commercial charging equipment, provided it is placed in a low-income community.
When it comes to new regulations, Pearce suggests that policymakers begin with a small step, like mandating solar-powered carports with EV charging capabilities for new surface parking or government-owned lots. After that, requirements for other locations like public universities could follow, he adds.
States or municipalities could also offer incentives other than the existing federal solar tax credit. To encourage state agencies, government offices, businesses, and nonprofits to install EV-charging solar canopies over parking lots, the Maryland Energy Administration’s Solar Canopy and Dual Use Technology Grant Program is offering grants. In 2023, one of these grants enabled IKEA to install a 1.5-megawatt solar canopy with EV charging stations at its Baltimore store.
Moreover, offering low- or no-interest loans to small- and medium-sized businesses can help them “keep up with the big firms investing millions in solar now simply to make money,” says Pearce. In general, if the federal government hopes to break one of the biggest barriers to the installation of solar canopies with EV charging capabilities, reducing upfront costs would be the key.
Why Must Text Data Be Pre-Processed?
This article was published as a part of the Data Science Blogathon
Introduction

Machines can’t understand any text data at all, be it the word “blah” or the word “machine”. They only understand numbers. So, over the decades, scientists have researched how to make machines understand our language, and thus developed the field of Natural Language Processing, or NLP.
What is Natural Language Processing?

Natural Language Processing, or NLP, is a branch of Artificial Intelligence that deals with interactions between computers and human language. NLP combines computational linguistics with statistical, machine learning, and deep learning models, allowing computers to understand language and extract useful information from text data. Some real-world applications of NLP are:
Speech recognition – The task of converting voice data to text data.
Sentiment analysis – The task of extracting qualities like attitudes and emotions from text. The most basic task in sentiment analysis is to classify the polarity of a sentence as positive, negative, or neutral.
Natural language generation – The task of producing text data from some structured data.
Part-of-speech (POS) tagging – The task of tagging the Part of Speech of a particular word in a sentence based on its definition and context.
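As a toy illustration of the polarity-classification task mentioned above, the sketch below scores a sentence against two tiny hand-made word lists. The word sets are illustrative assumptions, not a real sentiment lexicon, and real systems use far richer models.

```python
# Toy lexicon-based polarity classifier; the word sets are made up.
POSITIVE = {"good", "great", "happy", "love"}
NEGATIVE = {"bad", "terrible", "sad", "hate"}

def polarity(sentence):
    words = sentence.lower().split()
    # Count positive hits minus negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this great movie"))  # positive
```

Notice that this naive approach depends entirely on exact word matches, which is one reason the pre-processing discussed below (lowercasing, stripping punctuation, stemming) matters so much in practice.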
Some common techniques used to process text data are Count Vectorization, Tf-Idf Vectorization, etc. These techniques help convert our text sentences into numeric vectors. Now, the question arises: ‘Aren’t we already processing the data with these techniques? So why do we need to pre-process it?’ This article answers that question, with Python code examples along the way.
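For instance, here is a minimal sketch of turning sentences into numeric vectors with scikit-learn’s TfidfVectorizer (the toy sentences are made up for illustration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog sat", "the cat ran"]
vec = TfidfVectorizer()      # tokenizes, counts, and applies Tf-Idf weighting
X = vec.fit_transform(docs)  # one numeric vector per sentence
print(X.shape)               # (3, 5): 3 sentences, 5 vocabulary words
```

The vectorizer happily encodes whatever tokens it is given, noise included, which is exactly why the text should be cleaned up first.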
Need for Pre-Processing

Raw text data often contains unwanted or unimportant text, which can reduce the accuracy of our results and make the data hard to understand and analyze. So proper pre-processing must be done on the raw data.
Consider that you scraped some tweets from Twitter. For example,
” I am wayyyy too lazyyy!!! Never got out of bed for the whole 2 days. #lazy_days “
The sentences “I am wayyyy too lazyyy!!!” and “I am way too lazy” have the same semantic meaning, but give us totally different vibes, right? Depending on how the data is pre-processed, the results differ as well. Pre-processing is therefore one of the most important tasks in NLP: it removes the unimportant parts of our data and makes the data ready for further processing.
Some Python libraries for text pre-processing

Natural Language ToolKit (NLTK): NLTK is a wonderful open-source Python library that provides modules for classification, tokenization, stemming, tagging, etc.
Gensim: Gensim is also an open-source Python library that mainly focuses on statistical semantics: estimating the meanings of words by looking at patterns of words in huge collections of texts. Its gensim.parsing.preprocessing module provides various methods for parsing and preprocessing strings.
Scikit-learn: Some scikit-learn modules also provide text preprocessing tools. sklearn.feature_extraction.text provides the CountVectorizer() class, which combines text preprocessing, tokenizing, and filtering of stop words with count vectorization (converting the text documents to a matrix of token counts). Its preprocessor strips accents and lowercases letters, and its tokenizer and stop_words behavior can be configured through attributes.
Each dataset and task requires different pre-processing. For example, consider the sentence “I am wayyy too lazyyy!!!”. If your task is to extract the emotion of the sentence, the exclamation marks and the way the words “wayyy” and “lazyyy” are written all become important. But if your task is just to classify the polarity of the sentence, these are not important, so you can remove the exclamation marks and stem “wayyy” to “way” and “lazyyy” to “lazy” to make further processing easier. Depending on the task you want to achieve, the pre-processing steps must be chosen carefully.
Below are some simple steps to pre-process the example tweet for the basic sentiment analysis task of classifying its polarity. Here I have used the gensim library.
The first step of data pre-processing is encoding in the proper format. The utils.to_unicode function in the gensim library can be used for this: it converts a string (a bytestring in a given encoding, or unicode) to unicode.
import gensim
from gensim import utils

s = " I am wayyyy too lazyyy!!! Never got out of bed for the whole 2 days. #lazy_days "
s = utils.to_unicode(s)
print(s)

Output:
I am wayyyy too lazyyy!!! Never got out of bed for the whole 2 days. #lazy_days
Then convert all uppercase letters to lowercase. “Never” and “never” are the same word, but the computer treats them as different words.
s = s.lower()
print(s)

Output:
i am wayyyy too lazyyy!!! never got out of bed for the whole 2 days. #lazy_days
Remove the tags and punctuation. They behave like noise in the text data, since they carry no semantic meaning for this task.
import gensim.parsing.preprocessing as gp

s = gp.strip_punctuation(s)
s = gp.strip_tags(s)
print(s)

Output:
i am wayyyy too lazyyy never got out of bed for the whole 2 days lazy days
Remove all the numbers, because we are preparing the data for basic sentiment analysis (positive, negative, or neutral classification), where numbers are not important.
s = gp.strip_numeric(s)
print(s)

Output:
i am wayyyy too lazyyy never got out of bed for the whole days lazy days
Also, get rid of the multiple white spaces.
s = gp.strip_multiple_whitespaces(s)
print(s)

Output:
i am wayyyy too lazyyy never got out of bed for the whole days lazy days

Next, remove the stop words: the common words of a language that carry little meaning on their own. ‘Is’, ‘and’, ‘the’, etc. are some stop words in English. Removing them can improve accuracy a lot.
s = gp.remove_stopwords(s)
print(s)

Output:
wayyyy lazyyy got bed days lazy days

Stemming is also a very important step. Stemming is the process of reducing words to their roots, for example, ‘stemming’ to ‘stem’. The stem_text() function returns the Porter-stemmed version of the string. The Porter stemmer is known for its speed and simplicity.
s = gp.stem_text(s)

The resulting strings can then be used for further processing, such as conversion into numeric vectors using techniques like Count Vectorization.
So far, for our basic sentiment analysis task, we have seen how to pre-process a single tweet. Hopefully each step is clear. The same process can be applied to a whole dataset of tweets.
import pandas as pd
import gensim
from gensim import utils
import gensim.parsing.preprocessing as gp

df = pd.read_csv(folderpath)  # consider that the df['tweets'] column contains tweets

def preprocess_text(s):
    s = utils.to_unicode(s)
    s = s.lower()
    s = gp.strip_punctuation(s)
    s = gp.strip_tags(s)
    s = gp.strip_numeric(s)
    s = gp.strip_multiple_whitespaces(s)
    s = gp.remove_stopwords(s)
    s = gp.stem_text(s)
    return s

df['tweets'] = df['tweets'].apply(str)              # convert each row to string type
df['tweets'] = df['tweets'].apply(preprocess_text)  # pre-process each tweet

Thus, we have preprocessed our dataset of tweets.
Conclusion

Preprocessing text data is one of the most difficult tasks in Natural Language Processing, because there are no fixed statistical guidelines, and it is extremely important at the same time. Follow the steps you feel are necessary to process the data, depending on the task you want to achieve.
Hope you enjoyed this article and learned something new. Thank you for reading. Feel free to share it with your study buddies if you liked it.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.