A campaign has insufficient funds or poor targeting.
A successful PPC campaign requires a lot of data to produce meaningful statistics. Unfortunately, data collection is often restricted by two things:
- Inadequate targeting
- Insufficient budget
If you don’t have a big enough audience or budget to get the data you need, it’s very difficult to make any media impact, measure results, and make informed adjustments to your strategy.
In this example we have used the following:
- Traffic Source: AdWords
- Overarching Niche: Industrial
- Conversion Tracking: Lead Generation (Phone Calls & Form Submissions)
- Duration: 4 Months
Results after four months of running the campaign:
- The total number of qualified leads (including branded) increased 85.71%
- The total number of qualified leads (excluding branded) increased 140.00%
- The average cost per lead (including branded) decreased 31.09%
- The average cost per lead (excluding branded) decreased 45.61%
- The overall average cost per click decreased 45.02%
Note that the lead-increase percentages look dramatic because the sample size is small.
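To see why small samples inflate percentages, here is a minimal sketch. The lead counts below are hypothetical, chosen only to be consistent with the reported percentages; the actual counts aren't stated in this write-up.

```python
def pct_change(before, after):
    """Percentage change between two period totals."""
    return (after - before) / before * 100

# Hypothetical lead counts consistent with the percentages above:
# 7 -> 13 qualified leads including branded, 5 -> 12 excluding branded.
print(f"Including branded: {pct_change(7, 13):+.2f}%")  # +85.71%
print(f"Excluding branded: {pct_change(5, 12):+.2f}%")  # +140.00%
```

A swing of just seven extra leads produces a headline figure of +140%, which is why the raw counts matter more than the percentages at this scale.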
Everyone has their own preferences on how to set up and structure an AdWords account. This account has 9 campaigns with a shared budget. By doing so we could easily segment the budgets at a later date, should campaign performance provide a reason to do so.
In addition, we also utilized the following ad extensions:
- Sitelink Extensions
- Call Extensions
- Callout Extensions
With a typical digital agency agreement, the client sets a maximum ad spend but no target CPA. In other words, the client wants the agency to “generate as many leads as possible for as few dollars or Dirhams as possible.”
In a strong economy, most clients will see a positive return on their ad spend, meaning a better-than-average CPA (cost per acquisition). In an ideal world, it follows that they would happily increase their budget, confident of a positive ROI. In the real world, however, a lot of ad budgets are capped at the beginning of the financial year, which often translates to high client expectations paired with low available budgets.
Month #1: Data accumulation
Month #1 is always about collecting data. You need two to three weeks of data before making educated decisions, particularly when the budget is limited. As you can see below, we made very few alterations in the first month but gathered a lot of valuable information.
What this means:
1. Campaign #7 is highly competitive.
Although the image above doesn’t show any Dirham/dollar signs, you can compare the Cost (%) column of campaign #7 with campaign #2 and campaign #5 to see that these three are the big spenders.
What really separates #7 from #2 and #5 is the number of clicks. For a similar spend, we’re getting less than 25% of the volume, which means we’re paying more than four times as much per click. It’s expensive, and it’s not converting.
2. Our targeting is maxed out.
Take a look at the last three columns. Our search impression share (the number of impressions we received divided by the number of impressions we were eligible to receive) is around 75%. The other two columns explain why we aren’t capturing the other 25%.
Notice the last column, Search Lost IS (budget): it shows less than 5%, which means we’re spending about as much as our current targeting allows. That’s a problem, because we’re only spending 70% of the client’s target budget. They want to make the most of their ad spend, and I want to help them do that.
The Search Lost IS (rank) column shows that 20% of our lost impression share is due to rank, or lack thereof. We can overcome this with the following options:
- Increase our bids. We could try to buy the #1 ad slot every time, but this usually doesn’t work unless you have data proving that the top position converts better than other positions. Otherwise, it’s the quickest way to burn through your client’s budget.
- Change the campaign delivery method from standard to accelerated. This tells Google not to pace the delivery of your ads throughout the day. This decision also needs to be backed by supporting data.
- Improve relevancy and Quality Score. This means making sure our ads closely match the keywords we’re targeting, among other things. We might get more clicks, and we’ll also pay less for them.
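The impression-share arithmetic above can be sketched as follows. The raw impression counts are hypothetical, picked only to match the approximate splits described in the text (~75% IS, ~20% lost to rank, under 5% lost to budget).

```python
def impression_share(received, eligible):
    """Search impression share: impressions received / impressions eligible."""
    return received / eligible

# Hypothetical impression counts matching the splits described above.
eligible = 10_000
received = 7_500
lost_rank = 2_000
lost_budget = eligible - received - lost_rank  # the remainder: 500

print(f"Search IS:               {impression_share(received, eligible):.0%}")  # 75%
print(f"Search Lost IS (rank):   {lost_rank / eligible:.0%}")                  # 20%
print(f"Search Lost IS (budget): {lost_budget / eligible:.0%}")                # 5%
```

The three columns always sum to 100% of eligible impressions, which is why a small Lost IS (budget) figure tells you that rank and targeting, not money, are the binding constraints.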
3. Look closely at Campaigns #2 and #5.
Ignore campaign #3 for the moment. It’s a brand campaign, targeting users already searching for the company.
Aside from that, campaigns 2 & 5 are the campaigns converting and driving substantial volume.
At the end of the month the client received a report detailing these key points.
For the second month, the client and agency agreed to action the following items.
Month #2: Trying to get better leads
Here are the modifications we made.
We Paused Campaign #7
If you look back at the picture detailing each campaign’s performance, you can see that we received only 29 clicks. That’s not a statistically significant sample, and it doesn’t provide enough information to accurately determine whether the campaign was a success or a failure.
With that said, we spent a lot of money: the average cost per click was much higher than in the other campaigns. The decision can be argued both ways, but we ultimately chose to shut it down.
We Increased Our Geotargeting
Our targeting had reached its maximum before we paused campaign #7. The obvious solution was to widen our coverage and reach a larger geographic audience.
We Launched a New Campaign
In addition to covering more of the map, we launched a new campaign targeting more keywords. The objective was for this campaign to help supplement the loss in spend from campaign #7.
We Optimized Campaign #2 & Campaign #5
Here, “optimized” means setting up Single Keyword Ad Groups (SKAGs). This takes a good chunk of time, so it might not be practical to build every account with SKAGs from the start. However, once you know where the conversions are, it’s time to SKAG it up.
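A minimal sketch of the SKAG structure: one ad group per keyword, each holding that keyword’s exact- and phrase-match variants in AdWords syntax. The campaign name and keywords are hypothetical examples, not the client’s actual terms.

```python
def build_skags(campaign, keywords):
    """One ad group per keyword, with exact [kw] and phrase "kw" match variants."""
    return {
        f"{campaign} - {kw}": [f"[{kw}]", f'"{kw}"']
        for kw in keywords
    }

# Hypothetical keywords for an industrial lead-gen campaign:
skags = build_skags("Campaign #2", ["industrial valves", "pipe fittings supplier"])
for ad_group, match_types in skags.items():
    print(ad_group, "->", match_types)
```

The payoff is tight keyword-to-ad relevance: each ad group’s ads can mirror its single keyword exactly, which is what drives the Quality Score improvement mentioned earlier.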
After implementing these changes we let the campaigns run for the rest of the month with minimal adjustments. Here are the results.
What are the major differences?
1. Notice that the number of leads decreased.
What you may not notice here is that we spent much closer to our target budget (93% instead of 70%). If we’re spending more money, we should be getting more conversions, right?
If you look carefully at which campaigns are converting, you’ll see that campaign #3 – the brand campaign – didn’t generate a single conversion. With only 22 clicks, it’s hard to determine why. What we do know is that it’s not entirely fair to put the brand campaign in the same category as the others. Brand campaigns are expected to convert: the users clicking on those ads are already familiar with the business.
Simply put, we got fewer conversions, but they were harder-to-get conversions. That’s a win.
2. Campaign #10 is a gift from the heavens.
Profitable campaigns aren’t launched. They’re optimized.
Sometimes you get lucky and launch a campaign like this. That’s what happened here. We got lucky.
That’s about it for big changes in month #2.
Overall performance and development were discussed with the client, and a plan for month #3 was devised.
Month #3: Further refinement
We Stopped Over Half of the Campaigns
If the campaign hadn’t delivered a lead in the past two months it was stopped. Many of the campaigns weren’t driving enough volume to receive statistically significant data. They weren’t necessarily hurting anything, but managing them took time away from managing campaigns that were converting.
We Changed the Landing Pages
With campaign #10 we had tried something different from the other campaigns. Rather than send users to individualized landing pages, we sent them to the homepage. Since it seemed to work so well, we tried the same thing in the other campaigns to see if it increased conversions.
Below is the data from month #3.
These metrics look pretty good, but actually they’re a bit misleading.
1. We nearly DOUBLED our conversions.
The total conversions went from 13 to 25.
Our non-branded conversions stayed at 13. That’s… the same.
Basically, our branded conversions came back. That’s great, but it’s not a result of our optimization efforts.
2. New landing pages, same performance.
Campaign #10’s performance in month #2 seems to be a small fluke. It’s still a good campaign – just not as good as initial statistics implied. Nothing about the change in landing pages indicates better or worse performance.
Month #4: Ad Scheduling
Ad scheduling shouldn’t be your first optimization tactic. It can lead to huge improvements, but keyword optimization should come first.
The problem in this situation is that our search lost impression share due to budget doesn’t give us much wiggle room. If we start shutting off high volume keywords our total spend will drop quickly. Seeing how we’re targeting a small number of keywords with very little room, our keywords ultimately fall into one of three categories:
- Keywords that are converting
- Keywords that are driving a lot of traffic, but aren’t converting
- Keywords that aren’t driving enough traffic to accurately determine if they should be removed
For this reason, ad scheduling seemed like the next logical step.
Day of the Week
To try and get enough data, here are the numbers from the last three months.
Hour of the Day
Trends were also found by hour of the day.
Roughly 80% of our budget was spent between 8:00am and 8:00pm. This is when we saw nearly all of our conversions.
Between 8:00pm and 8:00am, 20% of our budget was spent for two total conversions. In other words, it wasn’t worth advertising during the night.
The difference in CPA between peak and poor hours was so large that we chose to shut down ads between 8:00pm and 8:00am entirely. Since this change once again limited our targeting, we expanded our geotargeting to compensate.
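The dayparting math can be sketched like this. The total spend and daytime conversion count are hypothetical; only the 80/20 budget split and the two overnight conversions come from the data above.

```python
def cpa(spend, conversions):
    """Cost per acquisition for a dayparting bucket."""
    return spend / conversions if conversions else float("inf")

total_spend = 10_000  # hypothetical monthly spend (Dirhams)
day = {"spend": 0.8 * total_spend, "conversions": 20}   # 8:00am-8:00pm (assumed count)
night = {"spend": 0.2 * total_spend, "conversions": 2}  # 8:00pm-8:00am (from the data)

print(f"Daytime CPA:   {cpa(**day):,.0f}")    # 400
print(f"Overnight CPA: {cpa(**night):,.0f}")  # 1,000
```

Under these assumed numbers the overnight CPA is 2.5x the daytime CPA, which is the kind of gap that justifies shutting off night hours and reallocating the spend.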
This is what happened.
Surprisingly, our average cost per click (CPC) dropped. This isn’t what you would expect from reallocating money to more competitive hours. It’s very possible our competitors haven’t looked at their ad scheduling data.
1. Conversions changed slightly.
The total number of conversions increased because branded conversions increased. Non-branded conversions actually decreased by one.
Ultimately, the data set is too small to say much of anything.
2. Ad scheduling made no clear changes to conversions.
I believe our decision to change when the ads run was correct and is saving the client money. It didn’t increase non-branded conversions at this time, but it very well may in the future.
In Summary and going forward
This account is now performing pretty well and will continue to improve over time. The challenge in this situation was that we had little statistically significant information and were making big decisions on small data samples. Now that a decent amount of our impression share is limited by budget, we can start shutting down keywords that aren’t performing.
We do our best for clients with the data Google provides.