1. Introduction
Understanding the available UK AI grants is only the first step. The harder task is converting a business idea into a credible grant application. Many founders fail not because their idea has no potential, but because they write the application in the wrong language. They describe the product as if they are selling to customers, instead of explaining the innovation as if they are justifying public investment.
A customer wants to know whether the product solves their problem. A grant assessor wants to know whether the project is innovative, risky, feasible, economically valuable, responsibly governed and suitable for public funding. This difference is critical. A good commercial pitch can still be a weak grant application if it does not explain novelty, technical risk, market evidence, value for money, project management and wider UK benefit.
Innovate UK is the UK’s national innovation agency and supports business-led innovation across sectors, technologies and regions (Innovate UK, 2026a). For most Innovate UK competitions, founders apply through the Innovation Funding Service, where open competitions, application forms and guidance are published (Innovate UK, 2026b). Public grant programmes such as Frontier AI Discovery, AI Champions and related innovation competitions usually require applicants to follow a structured process, answer scored questions, justify costs and provide evidence that the project is deliverable.
This section explains the practical side of grant applications. It covers registration, assessment, scored questions, matched funding, project costs, responsible AI, evidence packs, common mistakes and how founders can prepare stronger applications.
2. The First Rule: Apply to the Right Grant
Before writing anything, founders must check whether the opportunity actually fits the project. This sounds obvious, but it is one of the most common reasons applications fail.
A project may involve AI but still not fit a frontier AI grant. A project may involve data but still not be a Sovereign AI strategic asset. A project may help businesses adopt AI but still not fit BridgeAI unless it is linked to a priority sector or a relevant programme call.
The first screening question should be:
Is this grant designed for the type of innovation I am actually building?
For example, a startup building a chatbot on top of an existing large language model may be commercially useful, but it is unlikely to fit a frontier AI competition unless there is a genuinely novel model architecture, training method, benchmark or technical capability. By contrast, the same startup may fit a practical AI adoption programme if it solves a defined sector problem and improves productivity.
The second screening question should be:
Can I prove the project is in scope using the funder’s own language?
For Frontier AI Discovery, the official page describes the competition as supporting feasibility studies for frontier AI and foundation models, with projects expected to develop novel AI and ML technologies and prepare for potential larger Phase 2 collaborative R&D projects (Innovate UK, 2026c). For BridgeAI, the programme focuses on responsible AI adoption in sectors such as agriculture, construction, creative industries, transport, logistics and warehousing (Innovate UK Business Connect, 2026). For Sovereign AI Strategic Assets, the programme focuses on high-value AI datasets and autonomous or automated laboratory infrastructure (Sovereign AI Fund, 2026).
A strong application repeats the funder’s priorities back in a specific, evidence-based way. A weak application tries to force a general startup idea into the wrong funding category.
3. Registration and the Innovation Funding Service
Many Innovate UK competitions are managed through the Innovation Funding Service. This is the portal where applicants can find competitions, create applications, invite collaborators where allowed, answer questions, upload information and submit proposals. The Innovation Funding Service lists competitions and acts as the main application route for Innovate UK opportunities (Innovate UK, 2026b).
For a founder, the practical steps are usually:
- Find the relevant competition.
- Read the full competition guidance.
- Check eligibility.
- Register or sign in to the Innovation Funding Service.
- Start the application.
- Complete all scored and non-scored questions.
- Add project costs.
- Add collaborators if the competition allows or requires them.
- Upload supporting documents if requested.
- Submit before the deadline.
Deadlines are strict. If the competition closes at 11:00am, a submission at 11:01am is normally too late. Founders should not wait until the last hour. Grant portals can time out, documents may fail to upload, collaborator approvals may be incomplete, or budget details may need correction.
For the Frontier AI Discovery competition, the official page states that it opened on 14 April 2026 and closes on 10 June 2026 at 11:00am. The page also provides the application route and supporting information (Innovate UK, 2026c).
4. Eligibility: Do Not Skip the Boring Part
Eligibility is not a formality. A project can be technically excellent and still fail if it is ineligible. Innovate UK’s guidance explains that it can decide whether a proposal is in or out of scope or eligible for funding, and projects considered out of scope or ineligible will not be funded (Innovate UK, 2026d).
Founders should check:
- organisation type;
- UK registration requirements;
- business size;
- project start and end dates;
- project cost range;
- collaboration rules;
- subsidy rules;
- eligible activities;
- geographic requirements;
- excluded activities;
- whether the project must be led by a business;
- whether academic partners are allowed;
- whether subcontractors are allowed;
- whether previous applications affect eligibility.
This is especially important because different AI opportunities have different rules.
For example, AI Champions: Frontier AI Phase 1 was open to UK registered SME businesses, had eligible project costs between £150,000 and £250,000, and stated that up to 70% of costs could be covered depending on business size (Innovate UK Business Connect, 2026a). Frontier AI Discovery has smaller eligible project costs of £25,000 to £50,000, while Sovereign AI Strategic Assets is much larger at £1 million to £9 million.
Therefore, founders must not copy assumptions from one grant into another. Each competition must be read separately.
5. How Assessment Usually Works
Many Innovate UK applications are assessed by independent assessors. The application is scored against the competition’s published questions and criteria. Innovate UK’s assessor guidance documents commonly explain that assessors review answers for each scored question and mark them between 1 and 10, where 1 is the lowest and 10 is the highest (Innovate UK, 2023).
Innovate UK also states that it engages assessors to act on its behalf but retains the right to make final decisions on scope, eligibility and funding (Innovate UK, 2026d). This means that the assessor score is important, but it is not the only thing that matters. Portfolio balance, budget limits, eligibility and strategic fit can also affect outcomes.
Founders should understand that assessors are not reading the application as fans of the startup. They are judging whether the written evidence answers the question. If the question asks for market evidence, the founder must provide market evidence. If the question asks for technical risk, the founder must explain technical risk. If the question asks for project costs, the founder must justify project costs.
A strong grant application therefore uses a disciplined structure:
- answer the question directly;
- use headings;
- provide evidence;
- quantify where possible;
- avoid unsupported claims;
- connect the answer to the competition scope;
- explain risks honestly;
- show why the project needs public funding.
6. The Difference Between Marketing Language and Grant Language
Marketing language is designed to create excitement. Grant language is designed to prove credibility.
A marketing sentence might say:
Our AI platform will revolutionise startup success.
A grant sentence should say:
The project will test whether structured AI-assisted evidence scoring improves the completeness, reliability and actionability of early-stage business viability assessments compared with generic large language model outputs.
The second sentence is stronger because it defines the mechanism, comparison point and measurable outcome.
A marketing sentence might say:
We are building the future of business planning.
A grant sentence should say:
The project addresses a specific evidence-quality gap in early-stage business planning by combining structured founder inputs, market research data, competitor analysis, financial assumptions and responsible AI review into a staged decision-support workflow.
Again, the second version is stronger because it explains the problem and the innovation.
Grant writing should still be persuasive, but it must be evidence-led. The founder should avoid empty phrases such as:
- game-changing;
- revolutionary;
- world-class;
- AI-powered;
- disruptive;
- scalable;
- unique;
- innovative;
- next generation.
These phrases are not bad by themselves, but they are weak without evidence. If the founder says “unique,” they must prove uniqueness. If they say “scalable,” they must explain how scaling happens. If they say “innovative,” they must identify the innovation.
7. The Core Questions Every AI Grant Application Must Answer
Although each competition has its own form, most AI grant applications need to answer the same underlying questions.
7.1 What Problem Are You Solving?
The problem must be specific. A general statement such as “businesses need AI” is weak. A stronger statement defines the user, pain point and consequence.
Example:
SME warehouses often rely on manual supervisor judgement to forecast daily workload. This causes bottlenecks, overtime costs and delayed dispatch when order volume, delivery schedules and staffing availability change quickly.
This is stronger because it identifies a real operational pain.
7.2 Why Is the Problem Important?
The application should explain why the problem matters economically, socially, strategically or scientifically. This may include cost, productivity, safety, competitiveness, public value, regional growth or national capability.
7.3 What Is Innovative?
The founder must explain what is new compared with current alternatives. For AI grants, this may be technical novelty, workflow novelty, data novelty, evaluation novelty, responsible AI novelty or sector implementation novelty.
7.4 Why Is AI Needed?
AI should not be used simply because it sounds attractive. The application should explain why AI is suitable for the problem. Does the project require prediction, classification, optimisation, natural language processing, pattern detection, image analysis or decision support?
7.5 What Evidence Supports the Market Need?
Evidence could include customer interviews, pilots, letters of support, sector data, user testing, paid trials, waiting lists, procurement conversations or market research.
7.6 What Will the Project Deliver?
The application should define deliverables clearly. For example:
- prototype;
- feasibility report;
- benchmark results;
- dataset;
- pilot implementation;
- responsible AI framework;
- commercialisation plan;
- technical specification;
- Phase 2 consortium plan.
7.7 What Are the Risks?
Public innovation funding exists partly because innovation involves risk. A founder should not pretend there is no risk. Instead, they should identify and manage risk.
Risks may include:
- technical performance risk;
- data availability risk;
- adoption risk;
- regulatory risk;
- ethical risk;
- security risk;
- integration risk;
- financial risk;
- partner dependency;
- market timing risk.
7.8 What Is the UK Benefit?
The application should explain how the project benefits the UK. This may include job creation, productivity, exports, IP, regional growth, supply chain improvement, scientific benefit, sector resilience or national AI capability.
8. Matched Funding and Grant Intensity
A major misunderstanding among founders is the belief that a grant automatically pays for the whole project. In many commercial innovation grants, the applicant must fund part of the project costs.
General Innovate UK guidance explains that grant levels vary. It states that small and medium-sized businesses can receive funding of up to 67% of project costs, while larger businesses can receive up to 50%, and applicants need to fund the remaining costs themselves (GOV.UK, 2014). Specific competitions may use different rates. For example, AI Champions stated that up to 70% of costs could be covered depending on business size (Innovate UK Business Connect, 2026a).
This means the founder must understand the actual cash requirement.
For example, if a project costs £50,000 and the grant covers 70%, the grant contribution would be £35,000. The company would need to fund the remaining £15,000.
If a project costs £250,000 and the grant covers 70%, the grant contribution would be £175,000. The company would need to fund £75,000.
If a commercial Sovereign AI project costs £2 million and the grant covers 50%, the applicant or consortium may need to provide £1 million in matched funding.
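The matched-funding arithmetic above can be sketched in a few lines of Python. The rates and amounts are the illustrative figures from the examples, not official rates; always use the intervention rate published in the specific competition guidance.

```python
def matched_funding(total_cost: float, grant_rate: float) -> dict:
    """Split a project budget into grant contribution and applicant share.

    grant_rate is the fraction of eligible costs the grant covers
    (e.g. 0.70 for 70%); the real rate comes from the competition guidance.
    """
    grant = round(total_cost * grant_rate, 2)
    return {
        "total_cost": total_cost,
        "grant_contribution": grant,
        "applicant_share": total_cost - grant,
    }

# Worked examples from the text above
print(matched_funding(50_000, 0.70))     # applicant funds the remaining £15,000
print(matched_funding(250_000, 0.70))    # applicant funds the remaining £75,000
print(matched_funding(2_000_000, 0.50))  # applicant funds the remaining £1,000,000
```

A founder can run this against their own budget and cashflow forecast to check the real cash requirement before committing to an application.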
This is why financial readiness matters. Founders must show that they can pay their share, manage the project, evidence costs and survive the cashflow cycle.
9. What Costs Can Usually Be Included?
Eligible costs depend on the competition, but typical innovation project cost categories may include:
- staff costs;
- subcontractor costs;
- materials;
- equipment usage;
- travel and subsistence where eligible;
- overheads;
- software or data costs where allowed;
- academic partner costs;
- project management.
The important principle is that costs must be necessary, reasonable and directly linked to the project. A founder should not include general company expenses that are not part of the funded work.
For an AI project, eligible costs might include:
- AI engineer time;
- data scientist time;
- product manager time;
- domain expert time;
- responsible AI review;
- cloud compute for experiments;
- dataset preparation;
- prototype development;
- technical testing;
- user testing;
- evaluation and benchmarking;
- project management.
Weak cost descriptions reduce credibility. A line such as “AI development: £40,000” is too vague. A stronger budget explains what work will be done, who will do it, how long it will take, and why the cost is reasonable.
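One way to make a cost line defensible is to build it up from effort and rates rather than stating a lump sum. The roles, day rates and day counts below are hypothetical placeholders, not benchmark figures:

```python
# Illustrative build-up of an "AI development" budget line.
# Roles, day rates and day counts are hypothetical examples only.
budget_lines = [
    # (role, day_rate_gbp, days, justification)
    ("ML engineer", 450, 40, "prototype model pipeline and benchmarking"),
    ("Data scientist", 400, 25, "dataset preparation and evaluation design"),
    ("Product manager", 350, 15, "user testing and work-package coordination"),
]

for role, rate, days, reason in budget_lines:
    print(f"{role}: {days} days x £{rate}/day = £{rate * days:,} ({reason})")

total = sum(rate * days for _, rate, days, _ in budget_lines)
print(f"Total staff cost: £{total:,}")
```

A breakdown like this lets an assessor see who does the work, for how long, at what rate, and why, which is exactly what a bare "AI development: £40,000" line fails to show.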
10. The Evidence Pack Founders Should Prepare
Before writing the application, founders should prepare a grant evidence pack. This is not always submitted as one document, but it supports stronger answers.
10.1 Technical Evidence
This may include:
- system architecture;
- prototype screenshots;
- model approach;
- data flow;
- benchmark plan;
- technology stack;
- current limitations;
- experimental design;
- responsible AI controls;
- security architecture.
10.2 Market Evidence
This may include:
- customer interviews;
- user pain-point analysis;
- competitor analysis;
- letters of support;
- pilot agreements;
- pricing assumptions;
- procurement evidence;
- sector reports.
10.3 Commercial Evidence
This may include:
- business model;
- route to market;
- sales strategy;
- pricing model;
- customer segments;
- revenue forecast;
- adoption barriers;
- partnership strategy.
10.4 Financial Evidence
This may include:
- project budget;
- cashflow forecast;
- matched funding evidence;
- staff cost calculations;
- subcontractor quotes;
- equipment quotes;
- grant claim planning.
10.5 Responsible AI Evidence
This may include:
- data protection approach;
- privacy statement;
- bias mitigation;
- explainability plan;
- human oversight;
- failure mode analysis;
- user safety;
- governance process.
10.6 UK Benefit Evidence
This may include:
- expected jobs;
- regional impact;
- productivity gains;
- export potential;
- IP ownership;
- supply chain benefits;
- sector competitiveness;
- wider ecosystem value.
A founder who prepares these materials before writing will produce a much stronger application.
11. Responsible AI as a Core Requirement
Responsible AI is no longer optional. AI grant applications must show that the founder understands risks and has a plan to manage them.
For many AI startups, responsible AI should include:
- human oversight;
- transparent limitations;
- data privacy;
- fairness and bias checks;
- explainability;
- security;
- model monitoring;
- user consent;
- audit logs;
- safe failure modes;
- protection against misuse;
- compliance with relevant law.
The importance of responsible AI is especially clear in sectors such as health, finance, construction, employment, education, defence and public services. However, it also matters for ordinary startup tools. If an AI system gives business advice, legal-readiness guidance or financial planning support, users may rely on it. The system must be designed to show uncertainty, encourage review and avoid false confidence.
For Dhruvi Infinity Inspiration, a strong responsible AI position could include:
The platform does not present AI outputs as final professional advice. It structures founder thinking, highlights assumptions, flags uncertainty, encourages evidence collection and supports human review. The system will include disclaimers, audit trails, editable outputs, source-aware prompts, risk flags and responsible AI guidance for users.
This is stronger than simply saying “we use ethical AI.”
12. Writing the Innovation Section
The innovation section is often the heart of the application. It should answer:
- What is new?
- Why is it hard?
- What is risky?
- What is better than current alternatives?
- What will be learned?
- How will success be measured?
For an AI Startup Builder platform, a weak innovation section might say:
Our platform uses AI to generate business plans faster than humans.
This sounds like a basic automation tool.
A stronger innovation section might say:
The innovation is a structured AI-assisted founder decision-support workflow that combines staged business-development frameworks, evidence scoring, contextual retrieval, uncertainty detection and human-in-the-loop review. The project will test whether this approach improves the reliability, completeness and actionability of business viability assessments compared with generic LLM-generated advice.
This is stronger because it identifies a method, not just a feature.
The founder should also explain the state of the art. For example:
Existing AI business-planning tools often generate static text outputs from short prompts. They may lack structured evidence capture, continuity across business-development stages, assumption tracking, benchmarked output evaluation and responsible AI safeguards. This creates a risk of generic, overconfident or poorly evidenced recommendations.
This explains the gap.
13. Writing the Market Section
The market section should avoid exaggerated total market claims. Assessors have seen many applications saying “the global AI market is worth billions.” That is rarely enough.
A stronger market section defines:
- specific customer segment;
- problem;
- buying trigger;
- budget holder;
- route to market;
- evidence of demand;
- adoption barriers;
- competition;
- pricing logic.
For example:
The initial target customers are UK early-stage founders, startup incubators, university enterprise teams and business support organisations that need structured support for idea validation, market analysis, financial planning and funding readiness. Evidence of demand will be gathered through pilot users, founder interviews, conversion data from public checkers and letters of support from entrepreneurship support organisations.
This is more credible than saying “all entrepreneurs need business planning.”
14. Writing the Project Plan
The project plan should be clear and realistic. A useful structure is:
Work Package 1: Discovery and Requirements
Define user needs, technical requirements, data requirements and responsible AI requirements.
Work Package 2: Prototype Development
Build the minimum technical system needed to test the core hypothesis.
Work Package 3: Data and Evidence Layer
Develop data structures, evidence capture, scoring logic and governance.
Work Package 4: Testing and Benchmarking
Compare prototype outputs against baselines, expert review or user-defined success criteria.
Work Package 5: Market and Commercial Validation
Test customer interest, pricing, user adoption and route to market.
Work Package 6: Responsible AI and Risk Review
Assess bias, privacy, explainability, user safety, limitations and governance.
Work Package 7: Final Report and Scale-Up Plan
Produce technical findings, commercial findings, risk analysis and next-stage roadmap.
This structure helps assessors see how the project will move from idea to evidence.
15. Writing the Risk Section
A strong risk section is honest. It does not pretend that everything will work. It shows that the team understands uncertainty and can manage it.
Example risk table:
Risk | Impact | Mitigation
AI outputs are too generic | High | Use structured prompts, evidence scoring, domain-specific context and expert review
Users overtrust AI advice | High | Add uncertainty flags, editable outputs, disclaimers and human review prompts
Data quality is inconsistent | Medium | Standardise inputs, validate fields and add completeness scoring
Founder adoption is low | Medium | Run pilot testing, simplify onboarding and use guided workflows
Technical integration takes longer than expected | Medium | Limit MVP scope and prioritise core validation features
Matched funding pressure | High | Confirm available cash and reduce project scope if needed
This kind of table is assessor-friendly because it is specific and practical.
16. Writing the Value for Money Section
Value for money does not mean “cheap.” It means the public funding is justified by the expected benefit.
A value-for-money answer should explain:
- why the project needs grant support;
- why the costs are reasonable;
- what public benefit is created;
- what private investment or matched funding is contributed;
- what happens if the grant is not awarded;
- what economic impact may result.
For example:
Public funding is justified because the project involves technical and market uncertainty around structured AI-assisted founder decision support. The grant will allow the company to test a benchmarked prototype, responsible AI controls and customer adoption before larger commercial investment. The expected benefit is improved quality of early-stage business planning, stronger founder readiness and potential productivity gains for UK startup-support organisations.
This is stronger than saying “we need money to build features.”
17. Writing the Team Section
The team section should show capability. It should answer:
- Who will deliver the project?
- What experience do they have?
- What gaps exist?
- Who will fill those gaps?
- Why is this team credible?
For a technical AI project, assessors may expect AI/ML expertise, software engineering, domain knowledge, commercial leadership and project management. If the founder lacks one area, they should explain how they will cover it through advisors, contractors, partners or future hires.
A founder should avoid pretending to have expertise they do not have. It is better to say:
The founding team has strong product and software development capability. The project will use an external responsible AI adviser and a part-time ML specialist to support benchmark design and technical validation.
This is more credible than overstating internal capability.
18. Common Reasons AI Grant Applications Fail
18.1 Wrong Grant Fit
The project does not match the scope.
18.2 Weak Innovation
The application describes normal product development, not innovation.
18.3 Too Much AI Hype
The proposal uses buzzwords without explaining technical or commercial substance.
18.4 No Evidence of Demand
The founder assumes customers want the product but provides no proof.
18.5 Weak State-of-the-Art Analysis
The application does not explain what exists already or why the proposed solution is better.
18.6 Poor Budget Justification
Costs are vague, inflated or not clearly linked to the work.
18.7 Weak Responsible AI
The project does not address bias, privacy, explainability, misuse or human oversight.
18.8 Unrealistic Timeline
The project tries to deliver too much in too little time.
18.9 No Matched Funding
The applicant cannot fund the non-grant share of costs.
18.10 Weak Commercialisation
The project may be technically interesting but has no credible route to market.
19. Practical Example: Rewriting a Weak AI Grant Idea
Weak Version
We are building an AI startup builder that helps people create business plans. It uses ChatGPT and will make entrepreneurship easier.
This is not strong enough for most serious grants. It sounds like an AI wrapper.
Stronger Version
The project will develop and test a structured AI-assisted startup development workflow that guides founders through idea validation, market analysis, competitor analysis, business model design, strategy, financial planning and legal readiness. The feasibility study will evaluate whether structured evidence capture, AI-assisted reasoning, uncertainty flags and human review improve the quality and reliability of founder decision-making compared with generic AI-generated business advice.
This version is stronger because it describes:
- the workflow;
- the technical method;
- the problem;
- the comparison point;
- the evaluation logic;
- the user benefit.
Strongest Version for Frontier AI-Style Positioning
The project will investigate a domain-specific AI reasoning and evidence-evaluation architecture for early-stage venture assessment. The system will combine structured founder inputs, retrieval-augmented context, staged business frameworks, assumption scoring and responsible AI controls. The feasibility study will benchmark the system against generic LLM outputs across completeness, evidence quality, hallucination risk, financial realism and actionability.
This version is stronger because it moves the claim from “AI writes content” to “AI reasoning and evidence evaluation.”
20. Grant Application Timeline for Founders
A realistic grant preparation timeline should begin weeks before the deadline.
6–8 Weeks Before Deadline
- Identify grant fit.
- Read guidance.
- Check eligibility.
- Define project scope.
- Contact partners.
- Create evidence pack.
4–6 Weeks Before Deadline
- Draft project plan.
- Build budget.
- Gather market evidence.
- Prepare technical description.
- Draft responsible AI section.
- Confirm matched funding.
2–4 Weeks Before Deadline
- Write full application.
- Review against scoring criteria.
- Ask external reviewers for feedback.
- Refine budget.
- Finalise partner details.
1 Week Before Deadline
- Complete portal fields.
- Check attachments.
- Confirm collaborators have accepted.
- Proofread answers.
- Submit early.
After Submission
- Prepare for possible interview or clarification if required.
- Continue building evidence.
- Keep partners engaged.
- Use the application as a strategic document even if unsuccessful.
21. Specific Grant Readiness for Dhruvi Infinity Inspiration
For Dhruvi Infinity Inspiration, grant readiness should be built in layers.
Layer 1: Product Evidence
The platform should collect evidence of user demand, usage, conversion, completion rates, founder problems and outcomes.
Examples:
- how many users complete Fast Checker;
- how many save an idea into Startup Builder;
- where users drop off;
- which tools users use most;
- how many founders complete market analysis or financial planning;
- which outputs users edit or reuse.
Layer 2: Technical Evidence
The product should document how AI is used responsibly and structurally.
Examples:
- prompt architecture;
- context control;
- JSON extraction;
- evidence scoring;
- output validation;
- AI workflow pipeline;
- hallucination mitigation;
- audit trails;
- user editability.
Layer 3: Business Evidence
The company should define the commercial model clearly.
Examples:
- free/pro/ultimate plan conversion;
- pricing logic;
- customer segments;
- founder pain points;
- competitor comparison;
- partnership routes;
- business support organisation use cases.
Layer 4: Responsible AI Evidence
The platform should show that it does not simply generate unchecked advice.
Examples:
- disclaimers;
- human review;
- editable outputs;
- uncertainty flags;
- source requirements;
- user responsibility prompts;
- legal and finance caution;
- privacy controls.
Layer 5: Grant-Specific Evidence
Depending on the grant, the company should adapt positioning.
For Frontier AI, focus on reasoning architecture, evidence evaluation and benchmarked validation.
For BridgeAI, focus on sector-specific AI adoption readiness.
For Sovereign AI, focus only on future strategic datasets or evidence infrastructure if the asset becomes credible and externally valuable.
22. Final Founder Checklist Before Submitting
Before submitting an AI grant application, the founder should be able to say “yes” to the following:
- The project fits the grant scope.
- The organisation is eligible.
- The project cost is within the allowed range.
- The deadline is realistic.
- The innovation is clearly explained.
- The current state of the art is understood.
- The customer problem is evidenced.
- The AI method is explained clearly.
- The project has measurable outcomes.
- The budget is justified.
- Matched funding is available.
- The team can deliver.
- Risks are identified and mitigated.
- Responsible AI is built into the project.
- UK benefit is specific and quantified where possible.
- The commercialisation plan is credible.
- The application answers each question directly.
- Evidence supports every major claim.
- The proposal avoids empty AI hype.
- The founder has submitted before the deadline.
If several of these are missing, the application may need more preparation.
23. Conclusion
Applying for UK AI grants in 2026 requires more than enthusiasm for artificial intelligence. It requires strategic fit, technical clarity, market evidence, responsible AI, financial readiness and disciplined project planning. The strongest founders understand that grant assessors are not buying a product; they are evaluating whether public funding should support a specific innovation project.
For Frontier AI Discovery, founders must prove that the project develops genuine frontier AI capability or prepares a credible route towards larger collaborative R&D. For Sovereign AI Strategic Assets, founders must show that they are creating an asset with wider value to the UK AI ecosystem. For BridgeAI, founders must show practical AI adoption in priority sectors with measurable productivity impact.
For Dhruvi Infinity Inspiration, the strongest grant strategy is not to describe the product as a generic AI business plan generator. The stronger direction is to frame it as a structured AI-assisted founder decision-support system that improves evidence quality, business-readiness assessment, responsible AI guidance and startup development workflows. Over time, this can create a more defensible and fundable innovation story.
A grant application should therefore become more than a funding request. It should become a strategic document that clarifies the company’s innovation, market, risks, evidence and future direction. Even if the grant is not awarded, the process can strengthen the business.