Book Review: But What If We’re Wrong?

Chuck Klosterman wrote one of the most stimulating books I read in 2016: But What If We're Wrong? Thinking About the Present As If It Were the Past. There are countless interesting observations on science, pop culture, sports, and history. By contemplating which of today's assumptions might be disproven in the future, or which celebrated authors of today might be forgotten while some no-name writer becomes famous after death, he unearths novel theories about familiar topics. Why did Herman Melville become renowned among future generations but not his own? Which scientific theories are we totally convinced are true today but may well be proven false by future generations of physicists? Which TV show made in 2016 will historians reference in 2080 when they try to explain what life was like in 2016?

My favorite paragraphs are pasted below; the bold emphasis is mine.

Thanks to Russ Roberts for recommending this via his EconTalk conversation with Chuck.


When you ask smart people if they believe there are major ideas currently accepted by the culture at large that will eventually be proven false, they will say, “Well, of course. There must be. That phenomenon has been experienced by every generation who’s ever lived.”

Aristotle had argued more than a thousand years prior: He believed all objects craved their “natural place,” and that this place was the geocentric center of the universe, and that the geocentric center of the universe was Earth. In other words, Aristotle believed that a dropped rock fell to the earth because rocks belonged on earth and wanted to be there.

For the next thirty years, nothing about the reception of [Moby Dick] changes. But then World War I happens, and—somehow, and for reasons that can’t be totally explained—modernists living in postwar America start to view literature through a different lens. There is a Melville revival. The concept of what a novel is supposed to accomplish shifts in his direction and amplifies with each passing generation…

I suspect most conventionally intelligent people are naïve realists, and I think it might be the defining intellectual quality of this era. The straightforward definition of naïve realism doesn’t seem that outlandish: It’s a theory that suggests the world is exactly as it appears.

Any time you talk to police (or lawyers, or journalists) about any kind of inherently unsolvable mystery, you will inevitably find yourself confronted with the concept of Occam’s Razor: the philosophical argument that the best hypothesis is the one involving the lowest number of assumptions.

The reason something becomes retrospectively significant in a far-flung future is detached from the reason it was significant at the time of its creation—and that’s almost always due to a recalibration of social ideologies that future generations will accept as normative.

The arc of Lethem’s larger contention boils down to two points. The first is that no one is really remembered over the long haul, beyond a few totemic figures—Joyce, Shakespeare, Homer—and that these figures serve as placeholders for the muddled generalization of greatness (“Time is a motherfucker and it’s coming for all of us,” Lethem notes).

The reason shadow histories remained in the shadows lay in the centralization of information: If an idea wasn’t discussed on one of three major networks or on the pages of a major daily newspaper or national magazine, it was almost impossible for that idea to gain traction with anyone who wasn’t consciously searching for alternative perspectives. That era is now over. There is no centralized information, so every idea has the same potential for distribution and acceptance.

Competing modes of discourse no longer “compete.” They coexist.

Take, for example, the plight of Native Americans. What American subculture has suffered more irrevocably? Prior to Columbus’s landing in the New World, the Native American population approached one hundred million. Now it’s a little over three million, two-thirds of whom are relegated to fifty delineated reservations on mostly undesirable land. Still, that equates to roughly 1 percent of the total US population. Yet Native Americans are essentially voiceless, even in conversations that specifically decry the lack of minority representation. Who is the most prominent Native American media figure or politician? Sherman Alexie? Louise Erdrich? Tom Cole or Markwayne Mullin, both of whom are from the same state? Who, for that matter, is the most famous Native American athlete, or rapper, or reality star? Maybe Sam Bradford? Maybe Sacheen Littlefeather, who’s been virtually invisible since the seventies? When the Academy Awards committee next announces the nominations for Best Picture, how many complaints will focus on the lack of films reflecting the Native American experience? Outside the anguish expressed over the use of the term “Redskin” by the Washington football franchise, it’s hard to find conversation about the biases facing Native Americans; outside the TV show Fargo, you almost never see it reflected in the popular culture. Everyone concedes it exists, but it’s not a popular prejudice (at least not among the mostly white liberals who drive these conversations). Their marginalization is ignored, thus creating a fertile factory for the kind of brilliant outsider who won’t be recognized until that artist is dead and gone. So this is one possibility—a Navajo Kafka.

From Kurt Vonnegut's A Man Without a Country: "I think that novels that leave out technology misrepresent life as badly as Victorians misrepresented life by leaving out sex."

…the myth of universal timeliness. There is a misguided belief—often promoted by creative writing programs—that producing fiction excessively tied to technology or popular culture cheapens the work and detracts from its value over time. If, for example, you create a plot twist that hinges on the use of an iPad, that story will (allegedly) become irrelevant once iPads are replaced by a new form of technology. If a character in your story is obsessed with watching Cheers reruns, the meaning of that obsession will (supposedly) evaporate once Cheers disappears from syndication. If your late-nineties novel is consumed with Monica Lewinsky, the rest of the story (purportedly) devolves into period piece. The goal, according to advocates of this philosophy, is to build a narrative that has no irretraceable connection to the temporary world. But that's idiotic, for at least two reasons. The first is that it's impossible to generate deep verisimilitude without specificity. The second is that if you hide the temporary world and the work somehow does beat the odds and become timeless, the temporary world you hid will become the only thing anyone cares about.

But I’ve been a paid critic for enough years to know my profession regularly overrates many, many things by automatically classifying them as potentially underrated. The two terms have become nonsensically interchangeable.

The nonfiction wing of this level houses elemental tacticians like Robert Caro; someone like William T. Vollmann straddles both lines, fortified by his sublime recklessness. Even the lesser books from these writers are historically important, because—once you’re defined as great—failures become biographically instructive. 

The third tier houses commercial writers who dependably publish major or minor bestsellers and whose success or failure is generally viewed as a reflection of how much (or how little) those books sell. These individuals are occasionally viewed as “great at writing,” but rarely as great writers. They are envied and discounted at the same time. They are what I call “vocally unrated”: A large amount of critical thought is directed toward explaining how these types of novels are not worth thinking about.

Now, if the world were logical, certain predictions could be made about what bricks from that pyramid will have the greatest likelihood of remaining intact after centuries of erosion. Devoid of all other information, a betting man would have to select a level-one writer like Roth, just as any betting man would take the Yankees if forced to wager on who will win the World Series one hundred seasons from now. If you don’t know what the weather will be like tomorrow, assume it will be pretty much the same as today. But this would require an astonishing cultural stasis. It would not simply mean that the way we presently consume and consider Roth will be the way Roth is consumed and considered forevermore; it would mean that the manner in which we value and assess all novels will remain unchanged. It also means Roth must survive his inevitable post-life reevaluation by the first generation of academics who weren’t born until he was already gone, a scenario where there will be no room for advancement and plenty of room for diminishing perceptions (no future contrarian can provocatively claim, “Roth is actually better than everyone thought at the time,” because—at the time—everyone accepted that he was viewed as remarkable). He is the safest bet, but still not a safe bet. Which is why I find myself fixated on the third and sixth tiers of my imaginary triangle: “the unrated.” As specific examples, they all face immeasurable odds. But as a class, they share certain perverse advantages.

Normal consumers declare rock to be dead whenever they personally stop listening to it (or at least to new iterations of it), which typically happens about two years after they graduate from college.

The Beatles were the first major band to write their own songs, thus making songwriting a prerequisite for credibility; they also released tracks that unintentionally spawned entire subgenres of rock, such as heavy metal (“Helter Skelter”), psychedelia (“Tomorrow Never Knows”), and country rock (“I’ll Cry Instead”).

Do I think the Beatles will be remembered in three hundred years? Yes. I believe the Beatles will be the Sousa of Rock (alongside Michael Jackson, the Sousa of Pop). If this were a book of predictions, that's the prediction I'd make. But this is not a book about being right. This is a book about being wrong, and my faith in wrongness is greater than my faith in the Beatles' unassailability. What I think will happen is probably not what's going to happen. So I will consider what might happen instead.

Since rock, pop, and rap are so closely tied to youth culture, there’s an undying belief that young people are the only ones who can really know what’s good. It’s the only major art form where the opinion of a random fourteen-year-old is considered more relevant than the analysis of a sixty-four-year-old scholar. (This is why it’s so common to see aging music writers championing new acts that will later seem comically overrated—once they hit a certain age, pop critics feel an obligation to question their own taste.)

Take architecture: Here we have a creative process of immense functional consequence. It’s the backbone of the urban world we inhabit, and it’s an art form most people vaguely understand—an architect is a person who designs a structure on paper, and that design emerges as the structure itself. Architects fuse aesthetics with physics and sociology. And there is a deep consensus over who did this best, at least among non-architects: If we walked down the street of any American city and asked people to name the greatest architect of the twentieth century, most would say Frank Lloyd Wright. In fact, if someone provided a different answer, we’d have to assume we’ve stumbled across an actual working architect, an architectural historian, or a personal friend of Frank Gehry. Of course, most individuals in those subsets would cite Wright, too. But in order for someone to argue in favor of any architect except Wright (or even to be in a position to name three other plausible candidates), that person would almost need to be an expert in architecture. Normal humans don’t possess enough information to nominate alternative possibilities. And what emerges from that social condition is an insane kind of logic: Frank Lloyd Wright is indisputably the greatest architect of the twentieth century, and the only people who’d potentially disagree with that assertion are those who legitimately understand the question. History is defined by people who don’t really understand what they are defining.

I don’t believe all art is the same. I wouldn’t be a critic if I did. Subjective distinctions can be made, and those distinctions are worth quibbling about. The juice of life is derived from arguments that don’t seem obvious. But I don’t believe subjective distinctions about quality transcend to anything close to objective truth—and every time somebody tries to prove otherwise, the results are inevitably galvanized by whatever it is they get wrong.

To matter forever, you need to matter to those who don’t care. And if that strikes you as sad, be sad.

But maybe it takes an idiot to pose this non-idiotic question: How do we know we’re not currently living in our own version of the year 1599? According to Tyson, we have not reinvented our understanding of scientific reality since the seventeenth century. Our beliefs have been relatively secure for roughly four hundred years. That’s a long time—except in the context of science. In science, four hundred years is a grain in the hourglass.

One of Greene’s high-profile signatures is his support for the concept of “the multiverse.” Now, what follows will be an oversimplification—but here’s what that connotes: Generally, we work from the assumption that there is one universe, and that our galaxy is a component of this one singular universe that emerged from the Big Bang. But the multiverse notion suggests there are infinite (or at least numerous) universes beyond our own, existing as alternative realities. Imagine an endless roll of bubble wrap; our universe (and everything in it) would be one tiny bubble, and all the other bubbles would be other universes that are equally vast. In his book The Hidden Reality, Greene maps out nine types of parallel universes within this hypothetical system.

“In physics, when we say we know something, it’s very simple,” Tyson reiterates. “Can we predict the outcome? If we can predict the outcome, we’re good to go, and we’re on to the next problem. There are philosophers who care about the understanding of why that was the outcome. Isaac Newton [essentially] said, ‘I have an equation that says why the moon is in orbit. I have no fucking idea how the Earth talks to the moon. It’s empty space—there’s no hand reaching out.’

Galileo famously refused to chill and published his Dialogue Concerning the Two Chief World Systems as soon as he possibly could, mocking all those who believed (or claimed to believe) that the Earth was the center of the universe. The pope, predictably, was not stoked to hear this. But the Vatican still didn’t execute Galileo; he merely spent the rest of his life under house arrest (where he was still allowed to write books about physics) and lived to be seventy-seven.

What Bostrom is asserting is that there are three possibilities about the future, one of which must be true. The first possibility is that the human race becomes extinct before reaching the stage where such a high-level simulation could be built. The second possibility is that humans do reach that stage, but for whatever reason—legality, ethics, or simple disinterest—no one ever tries to simulate the complete experience of civilization. The third possibility is that we are living in a simulation right now. Why? Because if it’s possible to create this level of computer simulation (and if it’s legally and socially acceptable to do so), there won’t just be one simulation. There will be an almost limitless number of competing simulations, all of which would be disconnected from each other. A computer program could be created that does nothing except generate new simulations, all day long, for a thousand consecutive years. And once those various simulated societies reach technological maturity, they would (assumedly) start creating simulations of their own—simulations inside of simulations.

The term “conspiracy theory” has an irrevocable public relations problem. Technically, it’s just an expository description for a certain class of unproven scenario. But the problem is that it can’t be self-applied without immediately obliterating whatever it’s allegedly describing. You can say, “I suspect a conspiracy,” and you can say, “I have a theory.” But you can’t say, “I have a conspiracy theory.” Because if you do, it will be assumed that even you don’t entirely believe the conspiracy you’re theorizing about.

But it still must be asked: Discounting those events that occurred within your own lifetime, what do you know about human history that was not communicated to you by someone else? This is a question with only one possible answer.

This, it seems, has become the standard way to compartmentalize a collective, fantastical phenomenon: Dreaming is just something semi-interesting that happens when our mind is at rest—and when it happens in someone else’s mind (and that person insists on describing it to us at breakfast), it isn’t interesting at all.

[On the BuzzFeed blue-and-black vs. white-and-gold dress viral phenomenon.] The next day, countless pundits tried to explain why this had transpired. None of their explanations were particularly convincing. Most were rooted in the idea that this happened because we were all looking at a photo of a dress, as opposed to the dress itself. But that only shifts the debate, without really changing it—why, exactly, would two people see the same photograph in two completely different ways?

Adams is the author of On the Genealogy of Color. He believes the topic of color is the most concrete way to consider the question of how much—or how little—our experience with reality is shared with the experience of other people. It’s an unwieldy subject that straddles both philosophy and science. On one hand, it’s a physics argument about the essential role light plays in our perception of color; at the same time, it’s a semantic argument over how color is linguistically described differently by different people. There’s also a historical component: Up until the discovery of color blindness in the seventeenth century, it was assumed that everyone saw everything the same way (and it took another two hundred years before we realized how much person-to-person variation there is). What really changed four hundred years ago was due (once again) to the work of Newton and Descartes, this time in the field of optics. Instead of things appearing “red” simply because of their intrinsic “redness” (which is what Aristotle believed), Newton and Descartes realized it has to do with an object’s relationship to light.

On the same day I spoke with Linklater about dreams, there was a story in The New York Times about a violent incident that had occurred a few days prior in Manhattan. A man had attacked a female police officer with a hammer and was shot by the policewoman’s partner. This shooting occurred at ten a.m., on the street, in the vicinity of Penn Station. Now, one assumes seeing a maniac swinging a hammer at a cop’s skull before being shot in broad daylight would be the kind of moment that sticks in a person’s mind. Yet the Times story explained how at least two of the eyewitness accounts of this event ended up being wrong. Linklater was fascinated by this: “False memories, received memories, how we fill in the blanks of conjecture, the way the brain fills in those spaces with something that is technically incorrect—all of these errors allow us to make sense of the world, and are somehow accepted enough to be admissible in a court of law. They are accepted enough to put someone in prison.” And this, remember, was a violent incident that had happened only hours before. The witnesses were describing something that had happened that same day, and they had no incentive to lie.

How much of history is classified as true simply because it can’t be sufficiently proven false?

All of which demands a predictable question: What significant historical event is most likely wrong? And not because of things we know that contradict it, but because of the way wrongness works.

When D. T. Max published his posthumous biography of David Foster Wallace, it was depressing to discover that many of the most memorable, electrifying anecdotes from Wallace’s nonfiction were total fabrications.

In Ken Burns’s documentary series The Civil War, the most fascinating glimpses of the conflict come from personal letters written by soldiers and mailed to their families. When these letters are read aloud, they almost make me cry. I robotically consume those epistles as personal distillations of historical fact. There is not one moment of The Civil War that feels false. But why is that? Why do I assume the things Confederate soldiers wrote to their wives might not be wildly exaggerated, or inaccurate, or straight-up untruths?

I doubt the current structure of television will exist in two hundred fifty years, or even in twenty-five. People will still want cheap escapism, and something will certainly satisfy that desire (in the same way television does now). But whatever that something is won’t be anything like the television of today. It might be immersive and virtual (like a Star Trekian holodeck) or it might be mobile and open-sourced (like a universal YouTube, lodged inside our retinas). But it absolutely won’t be small groups of people, sitting together in the living room, staring at a two-dimensional thirty-one-inch rectangle for thirty consecutive minutes, consuming linear content packaged by a cable company.

[To understand a given era through a TV show.] We'd want a TV show that provided the most realistic portrait of the society that created it, without the self-aware baggage embedded in any overt attempt at doing so. In this hypothetical scenario, the most accurate depiction of ancient Egypt would come from a fictional product that achieved this goal accidentally, without even trying. Because that's the way it always is, with everything. True naturalism can only be a product of the unconscious. So apply this philosophy to ourselves, and…

To attack True Detective or Lost or Twin Peaks as “unrealistic” is a willful misinterpretation of the intent. We don’t need television to accurately depict literal life, because life can literally be found by stepping outside.

If anyone on a TV show employed the stilted, posh, mid-Atlantic accent of stage actors, it would instantly seem preposterous; outside a few notable exceptions, the goal of televised conversation is fashionable naturalism. But vocal delivery is only a fraction of this equation. There's also the issue of word choice: It took decades for screenwriters to realize that no adults have ever walked into a tavern and said, "I'll have a beer," without noting what specific brand of beer they wanted.

But when a show’s internal rules are good, the viewer is convinced that they’re seeing something close to life. When the rom-com series Catastrophe debuted on Amazon, a close friend tried to explain why the program seemed unusually true to him. “This is the first show I can ever remember,” he said, “where the characters laugh at each other’s jokes in a non-obnoxious way.” This seemingly simple idea was, in fact, pretty novel—prior to Catastrophe, individuals on sitcoms constantly made hilarious remarks that no one seemed to notice were hilarious. For decades, this was an unspoken, internal rule: No one laughs at anything. So seeing characters laugh naturally at things that were plainly funny was a new level of realness. The way a TV show is photographed and staged (this is point number three) are industrial attributes that take advantage of viewers’ preexisting familiarity with the medium: When a fictional drama is filmed like a news documentary, audiences unconsciously absorb the action as extra-authentic (a scene shot from a single mobile perspective, like most of Friday Night Lights, always feels closer to reality than scenes captured with three stationary cameras, like most of How I Met Your Mother).

What is the realest fake thing we’ve ever made on purpose? 

Nothing on TV looks faker than failed attempts at realism. A show like The Bachelor is instantly recognized (by pretty much everyone, including its intended audience) as a prefab version of how such events might theoretically play out in a distant actuality. No television show has ever had a more paradoxical title than MTV’s The Real World, which proved to be the paradoxical foundation of its success.

Roseanne was the most accidentally realistic TV show there ever was…By the standards of TV, both of these people were wildly overweight. Yet what made Roseanne atypical was how rarely those weight issues were discussed. Roseanne was the first American TV show comfortable with the statistical reality that most Americans are fat. And it placed these fat people in a messy house, with most of the key interpersonal conversations happening in the kitchen or the garage or the laundry room. These fat people had three non-gorgeous kids, and the kids complained constantly, and two of them were weird and one never smiled.

The less incendiary take on football’s future suggests that it will continue, but in a different shape. It becomes a regional sport, primarily confined to places where football is ingrained in the day-to-day culture (Florida, Texas, etc.). Its fanbase resembles that of contemporary boxing—rich people watching poor people play a game they would never play themselves.

A few months after being hired as head football coach at the University of Michigan, Jim Harbaugh was profiled on the HBO magazine show Real Sports. It was a wildly entertaining segment, heavily slanted toward the intellection that Harbaugh is a lunatic. One of the last things Harbaugh said in the interview was this: “I love football. Love it. Love it. I think it’s the last bastion of hope for toughness in America in men, in males.”

“But look what happened to boxing,” people will say (and these people sometimes include me). “Boxing was the biggest sport in America during the 1920s, and now it exists on the fringes of society. It was just too brutal.” Yet when Floyd Mayweather fought Manny Pacquiao in May of 2015, the fight grossed $400 million, and the main complaint from spectators was that the fight was not brutal enough. Because it operates on a much smaller scale, boxing is—inside its own crooked version of reality—flourishing. It doesn’t seem like it, because the average person doesn’t care. But boxing doesn’t need average people. It’s not really a sport anymore. It’s a mildly perverse masculine novelty, and that’s enough to keep it relevant.

Midway through the episode, the show’s producers try to mathematically verify if youth participation in football is decreasing as much as we suspect. It is. But the specificity of that stat is deceiving: It turns out youth participation is down for all major sports—football, basketball, baseball, and even soccer (the so-called sport of the future). Around the same time, The Wall Street Journal ran a similar story with similar statistics: For all kids between six and eighteen (boys and girls alike), overall participation in team sports was down 4 percent.

But sometimes the reactionaries are right. It’s wholly possible that the nature of electronic gaming has instilled an expectation of success in young people that makes physical sports less desirable. There’s also the possibility that video games are more inclusive, that they give the child more control, and that they’re simply easier for kids who lack natural physical gifts. All of which point to an incontestable conclusion: Compared to traditional athletics, video game culture is much closer to the (allegedly) enlightened world we (supposedly) want to inhabit.

The gap for the Famous Idaho Potato Bowl was even greater—the human attendance was under 18,000 while the TV audience approached 1.5 million. This prompted USA Today to examine the bizarre possibility of future bowl games being played inside gigantic television studios, devoid of crowds.

What makes the United States so interesting and (arguably) “exceptional” is that it’s a superpower that did not happen accidentally. It did not evolve out of a preexisting system that had been the only system its founders could ever remember; it was planned and strategized from scratch, and it was built to last. Just about everyone agrees the founding fathers did a remarkably good job, considering the impossibility of the goal.

This logic leads to a strange question: If and when the United States does ultimately collapse, will that breakdown be a consequence of the Constitution itself? If it can be reasonably argued that it’s impossible to create a document that can withstand the evolution of any society for five hundred or a thousand or five thousand years, doesn’t that mean present-day America’s pathological adherence to the document we happened to inherit will eventually wreck everything?

Wexler notes a few constitutional weaknesses, some hypothetical and dramatic (e.g., what if the obstacles created to make it difficult for a president to declare war allow an enemy to annihilate us with nuclear weapons while we debate the danger) and some that may have outlived their logical practicality without any significant downside (e.g., California and Rhode Island having equal representation in the Senate, regardless of population).

But I would traditionally counter that Washington’s One Big Thing mattered more, and it actually involved something he didn’t do: He declined the opportunity to become king, thus making the office of president more important than any person who would ever hold it. This, as it turns out, never really happened. There is no evidence that Washington was ever given the chance to become king, and—considering how much he and his peers despised the mere possibility of tyranny—it’s hard to imagine this offer was ever on the table.

Washington’s kingship denial falls into the category of a “utility myth”—a story that supports whatever political position the storyteller happens to hold, since no one disagrees with the myth’s core message (i.e., that there are no problems with the design of our government, even if that design allows certain people to miss the point).

…Every strength is a weakness, if given enough time.

Back in the landlocked eighties, Dave Barry offhandedly wrote something pretty insightful about the nature of revisionism. He noted how—as a fifth-grader—he was told that the cause of the Civil War was slavery. Upon entering high school, he was told that the cause was not slavery, but economic factors. At college, he learned that it was not economic factors but acculturalized regionalism. But if Barry had gone to graduate school, the answer to what caused the Civil War would (once again) be slavery.

Much of the staid lionization of Citizen Kane revolves around structural techniques that had never been done before 1941. It is, somewhat famously, the first major movie where the ceilings of rooms are visible to the audience. This might seem like an insignificant detail, but—because no one prior to Kane cinematographer Gregg Toland had figured out a reasonable way to get ceilings into the frame—there’s an intangible, organic realism to Citizen Kane that advances it beyond its time period. Those visible ceilings are a meaningful modernization that twenty-first-century audiences barely notice.

There’s growing evidence that the octopus is far more intelligent than most people ever imagined, partially because most people always assumed they were gross, delicious morons.

Comments
  • Regarding Chuck Klosterman’s assertion that “If we walked down the street of any American city and asked people to name the greatest architect of the twentieth century, most would say Frank Lloyd Wright”, I daresay that “most” Americans have no idea who Frank Lloyd Wright is. I also realize that this is informal writing, but I shouldn’t have to say that there are no “normal” consumers, or “normal” humans.

    I was struck by how often in just these tea-leaf reading excerpts Klosterman uses the word “most”: “most conventionally intelligent”, “most realistic”, “most people”, “most individuals”, “most fascinating”, “most realistic”, “most accurate”, “most people always assumed they were gross, delicious morons.” I would venture that “most” people have never thought for two seconds about the intelligence of octopuses or the verisimilitude of David Foster Wallace’s writing, but that anyone who did and was surprised to “discover [from a biography] that many of the most memorable, electrifying anecdotes from Wallace’s nonfiction were total fabrications” is a “most” gross, possibly delicious, moron.;-)
