Inductive Logic Programming - Theory

Introduction

This page provides an outline of the fundamental concepts and theory of ILP. The areas covered are: Inductive Inference, Inverse Resolution, Relative Least General Generalisation (rlgg), Inverse Implication (II), Inverse Entailment (IE), U-Learnability, and Current Research Issues.
Inductive Inference

Inductive inference is, in a sense, the inverse of deduction. However, deductive inference proceeds by application of sound rules of inference, while inductive inference typically involves unsound conjecture. Deductive inference derives consequences E from a prior theory T. Similarly, inductive inference derives a general belief T from specific beliefs E. In both deduction and induction, T and E must be consistent and T |= E.

Within ILP it is usual to separate the above elements into examples (E), background knowledge (B) and hypothesis (H). These have the relationship B ∧ H |= E. B, H and E are each logic programs. E usually consists of ground unit clauses of a single target predicate. E can be separated into positive examples (E+), as ground unit definite clauses, and negative examples (E-), as ground unit headless Horn clauses. However, the separation into B, H and E is a matter of convenience.
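For instance (an illustrative example, not taken from the original page), with

  B  = { parent(ann, bob), parent(bob, cal) }
  E+ = { grandparent(ann, cal) }
  E- = { <- grandparent(cal, ann) }

the single-clause hypothesis H = grandparent(X, Y) <- parent(X, Z), parent(Z, Y) satisfies B ∧ H |= E+ while B ∧ H remains consistent with the negative example, so H is an acceptable hypothesis in this setting.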
Inverse Resolution

Both Logic Programming and ILP are built upon Robinson's seminal work on Resolution Theorem Proving. He demonstrated that deductive inference in the first order predicate calculus could be effected by the single Resolution rule of inference. This forms a basis for the programming system Prolog. A single resolution step is shown below.
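The original figure is not available here; in standard notation, a single resolution step takes clauses (C ∨ l) and (D ∨ ¬l'), where θ is a most general unifier of the literals l and l', and derives the resolvent

  (C ∨ D)θ.

For example, resolving parent(X, Y) ∨ ¬father(X, Y) with father(ann, bob) gives parent(ann, bob).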
Inductive inference based on inverting resolution in propositional logic was the basis of the inductive inference rules within the Duce system.

Inductive inference rules

Duce had six inductive inference rules. Four of these were concerned with definite clause propositional logic. In the following description of the inference rules, lower-case letters represent propositional variables and upper-case letters represent conjunctions of propositional variables.
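The rules themselves appeared on the original page as displayed formulas; the four definite-clause rules, in the form in which they are usually quoted (a reconstruction, so the notation may differ from the original), are:

  Absorption:          given  q <- A  and  p <- A, B,  generalise the latter to  p <- q, B
  Identification:      given  p <- A, B  and  p <- A, q,  construct  q <- B
  Intra-construction:  given  p <- A, B  and  p <- A, C,  construct  q <- B,  p <- A, q  and  q <- C,  where q is a new symbol
  Inter-construction:  given  p <- A, B  and  q <- A, C,  construct  p <- r, B,  r <- A  and  q <- r, C,  where r is a new symbol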
Duce's inference rules invert single-depth applications of resolution. Using the rules, a set of resolution-based trees for deriving the examples can be constructed backwards from their roots. The set of leaves of the trees represents a theory from which the examples can be derived. In the process new proposition symbols, not found in the examples, can be ''invented'' by the intra- and inter-construction rules.
Inverse Resolution in First Order Logic

Inverse resolution was lifted to first-order predicate calculus. This involved algebraic inversion of the equations of resolution. Figure 1 shows a resolution step. During a deductive resolution step, D is derived at the base of the 'V' given the clauses on the arms. In contrast, a 'V' inductive inference step derives one of the clauses on the arm of the 'V' given the clause on the other arm and the clause at the base. In Figure 1 the literal resolved upon is positive (+) in C and negative (-) in C'. Duce's absorption rule constructs C' from C and D, while the identification rule derives C from C' and D. Since algebraic inversion of resolution has a complex non-deterministic solution, only a restricted form of absorption was implemented in Cigol (logiC backwards). However, there is a unique most-specific solution for the 'V' inductive inference rules. Rather than inverting the equations of resolution, we might consider resolution from the model-theoretic point of view: the two clauses on the arms together entail the clause at the base. Applying the deduction theorem then gives a deductive solution for absorption. This is a special case of inverting implication.
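Spelled out (the displayed formulas are not available from the original page, so this is the standard reconstruction): in a resolution step the clauses C and C' on the arms entail the resolvent D at the base,

  C ∧ C' |= D,

and applying the deduction theorem gives

  C ∧ ¬D |= ¬C',

so the unknown arm C' can be recovered deductively by deriving ¬C' from C together with the negation of D.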
Relative Least General Generalisation (rlgg)

One commonly advocated approach to learning from positive data is that of taking relative least general generalisations (rlggs) of clauses. In the late 1960s Reynolds and Plotkin investigated the problem of finding least general generalisations (lggs) of atoms. The work focused on the importance of Robinson's unification to deduction, and searched for an analogue useful in induction. Lggs were found to be, in some sense, an inverse to unification. Plotkin extended the investigation to clauses.
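A minimal sketch of Plotkin's lgg of two atoms, assuming a simple tuple encoding of terms (the representation and names are illustrative, not from the original page):

    # Terms: ('f', arg1, ...) for compound terms and atoms, plain strings for constants.
    def lgg(s, t, table=None):
        """Least general generalisation of two terms or atoms."""
        if table is None:
            table = {}                     # maps pairs of differing subterms to shared variables
        if s == t:
            return s                       # identical parts generalise to themselves
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            # same functor and arity: generalise argument-wise
            return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
        # mismatch: the same pair of subterms always maps to the same fresh variable
        if (s, t) not in table:
            table[(s, t)] = "V%d" % len(table)
        return table[(s, t)]

    # lgg(p(a, f(b)), p(c, f(d)))  ->  ('p', 'V0', ('f', 'V1')), i.e. p(V0, f(V1))
    print(lgg(('p', 'a', ('f', 'b')), ('p', 'c', ('f', 'd'))))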

He then went on to define the lgg of two clauses relative to clausal background knowledge B. Assume that one is attempting to learn target concept T from examples E = {x_1, x_2, ..., x_m}. Given background knowledge, H = rlgg_B(E) will be the hypothesis within the relative subsumption lattice with the fewest possible errors of commission (instances x in X for which H entails x and T does not entail x). This approach to learning from positive data has the following problems.

Arbitrary background knowledge. Plotkin showed that with unrestricted definite clause background knowledge B there may not be any finite rlgg_B(E).

Extensional background knowledge. Suppose B and E consist of n and m ground unit clauses respectively. In the worst case the number of literals in rlgg_B(E) will be (n+1)^m, making the construction intractable for large m.

Multiple clause hypotheses. Target concepts with multiple clauses cannot be learned, since rlgg_B(E) is a single clause.

The ILP system Golem was designed to learn by creating rlggs. Golem used extensional background knowledge to avoid the problem of non-finite rlggs. Extensional background knowledge B can be generated from intensional background knowledge B' by generating all ground unit clauses derivable from B' in at most h resolution steps. The parameter h is provided by the user. The rlggs constructed by Golem were forced to have only a tractable number of literals by requiring that the set of possible hypotheses contain only definite clause theories that were ij-determinate.
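A rough sketch of Plotkin's construction with extensional (ground) background knowledge, reusing the tuple encoding above (all names and data are illustrative, not from the original page): each example e_i is read as the clause e_i <- B, and the lgg of the resulting clauses is taken pairwise over literals that share a predicate symbol.

    def lgg_term(s, t, table):
        if s == t:
            return s
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            return (s[0],) + tuple(lgg_term(a, b, table) for a, b in zip(s[1:], t[1:]))
        if (s, t) not in table:
            table[(s, t)] = "V%d" % len(table)
        return table[(s, t)]

    def rlgg(e1, e2, background):
        """rlgg of two ground example atoms relative to ground background facts."""
        table = {}                               # shared, so repeated pairs reuse the same variable
        head = lgg_term(e1, e2, table)           # generalise the two example atoms
        body = [lgg_term(a, b, table)            # generalise every compatible pair of body atoms
                for a in background for b in background if a[0] == b[0]]
        return head, body

    B = [('parent', 'ann', 'bob'), ('parent', 'bob', 'cal'),
         ('parent', 'eve', 'ian'), ('parent', 'ian', 'joe')]
    head, body = rlgg(('grandparent', 'ann', 'cal'), ('grandparent', 'eve', 'joe'), B)
    print(head)        # ('grandparent', 'V0', 'V1')
    print(len(body))   # 16 body literals already, for only 4 background facts

Even in this tiny case the body contains 16 literals, most of them redundant; this is the blow-up the (n+1)^m bound above describes, and it is what Golem's determinacy restrictions are designed to control.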
Inverse Implication (II)

The inability to invert implication between clauses limits the completeness of inverse resolution and rlggs, since θ-subsumption is used in place of clause implication in both. Plotkin noted that if clause C θ-subsumes clause D then C |= D. However, he also notes that C |= D does not imply that C θ-subsumes D. Although efficient methods are known for enumerating every clause C which θ-subsumes an arbitrary clause D, this is not the case for clauses C which imply D. This is known as the problem of inverting implication between clauses.

Gottlob proved a number of properties concerning implication between clauses. He demonstrated that for any two non-tautological clauses C and D, where C+, C- and D+, D- are the sets of positive and negative literals of C and D respectively, C |= D implies that C+ θ-subsumes D+ and C- θ-subsumes D-. Sub-unification has been applied in an attempt to solve the inverting implication problem. Sub-unification is a process of matching sub-terms in D to produce C. It has been demonstrated that sub-unification is able to construct recursive clauses from fewer examples than would be required by ILP systems such as Golem and FOIL.

Another notable lemma was proved by Lee. This states that a clause T implies a clause C if and only if there exists a clause D in the resolution closure of T such that D θ-subsumes C. In particular it is shown that Lee's subsumption lemma has the following corollary. (Implication and recursion) Let C, D be clauses. C |= D if and only if either D is a tautology, or C θ-subsumes D, or there is a clause E such that E θ-subsumes D, where E is constructed by repeatedly self-resolving C. Thus the difference between θ-subsumption and implication between clauses C and D is only pertinent when C can self-resolve.

Attempts were made to a) extend inverse resolution and b) use a mixture of inverse resolution and lgg to solve the problem. The extended inverse resolution method suffers from problems of non-determinacy. Idestam-Almquist's use of lgg suffers from the standard problem of intractably large clauses. Both approaches are incomplete for inverting implication, though Idestam-Almquist's technique is complete for a restricted form of entailment called T-implication. It has been shown that for certain recursive clauses D, all the clauses C which imply D also θ-subsume a logically equivalent clause D'. Up to renaming of variables, every clause D has at most one most specific form D' in the θ-subsumption lattice. D' is called the self-saturation of D. However, there exist definite clauses which have no finite self-saturation.
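θ-subsumption itself is straightforward to test by brute force, which may help fix the definition: C θ-subsumes D iff there is a substitution θ with Cθ a subset of D. A sketch in the tuple encoding used above (an exponential toy checker; names are illustrative, not from the original page):

    from itertools import product

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()   # convention: variables start upper case

    def variables(t):
        if is_var(t):
            return {t}
        if isinstance(t, tuple):
            vs = set()
            for a in t[1:]:
                vs |= variables(a)
            return vs
        return set()

    def subterms(t):
        yield t
        if isinstance(t, tuple):
            for a in t[1:]:
                yield from subterms(a)

    def substitute(t, theta):
        if is_var(t):
            return theta.get(t, t)
        if isinstance(t, tuple):
            return (t[0],) + tuple(substitute(a, theta) for a in t[1:])
        return t

    def theta_subsumes(c, d):
        """True iff some substitution maps every literal of clause c into clause d."""
        vs = sorted({v for lit in c for v in variables(lit)})
        terms = sorted({s for lit in d for a in lit[1:] for s in subterms(a)}, key=str)
        dset = set(d)
        for values in product(terms, repeat=len(vs)):
            theta = dict(zip(vs, values))
            if all(substitute(lit, theta) in dset for lit in c):
                return True
        return False

    D = [('parent', 'ann', 'bob'), ('parent', 'bob', 'cal')]
    print(theta_subsumes([('parent', 'X', 'Y')], D))   # True
    print(theta_subsumes([('parent', 'X', 'X')], D))   # False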
Inverse Entailment (IE)

The general problem specification of ILP is: given background knowledge B and examples E, find the simplest consistent hypothesis H (where simplicity is measured relative to a prior distribution) such that B ∧ H |= E. Note that, in general, B, H and E could be arbitrary logic programs. Each clause in the simplest H should explain at least one example, since otherwise there is a simpler H' which will do. Consider then the case of H and E each being single Horn clauses. This can now be seen as a generalised form of absorption and rearranged similarly to give B ∧ ¬E |= ¬H. Since H and E are each single clauses, their negations will be logic programs consisting only of ground skolemised unit clauses. Let ¬⊥ be the (potentially infinite) conjunction of ground literals which are true in all models of B ∧ ¬E. Since ¬H must be true in every model of B ∧ ¬E, it must contain a subset of the ground literals in ¬⊥. Therefore B ∧ ¬E |= ¬⊥ |= ¬H, and so H |= ⊥ for all H. A subset of the solutions for H can be found by considering the clauses which θ-subsume ⊥. The complete set of candidates for H can be found by considering all clauses which θ-subsume sub-saturants of ⊥.
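A toy illustration of the construction (not an example from the original page): take B = {father(a, b)} and a single example E = parent(a, b), so ¬E = ¬parent(a, b). The ground literals true in every model of B ∧ ¬E are father(a, b) and ¬parent(a, b), so ¬⊥ = father(a, b) ∧ ¬parent(a, b) and ⊥ is the clause parent(a, b) <- father(a, b). Any hypothesis that θ-subsumes ⊥, for instance parent(X, Y) <- father(X, Y) or the unit clause parent(a, b), satisfies B ∧ H |= E.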
U-Learnability

U-Learnability is a new model of learnability, and is an alternative to PAC-Learnability. U-Learnability better matches the practical goals of machine learning. The major features of U-learnability that distinguish it from PAC-learnability are: 1. the use of probability distributions over concept classes, which assign probabilities to potential target concepts; and 2. average-case sample complexity and time complexity requirements, rather than worst-case requirements.

In the U-learnability model, a teacher randomly chooses a target concept according to a probability distribution over the concept class. The teacher then chooses examples randomly, with replacement, according to a probability distribution over the domain of examples, and labels the examples according to the chosen target. In general, these distributions may be known, completely unknown, or partially known to the learner. In the case where these distributions are completely unknown, worst-case analysis must be used, as in PAC-learnability.

More formally, the teacher starts by choosing distributions F and G from the family of distributions over concept descriptions H (wffs with associated bounds for the time taken to test entailment) and instances X (ground wffs) respectively. The teacher uses F and G to carry out an infinite series of teaching sessions. In each session a target theory T is chosen from F. Each T is used to provide labels from {True, False} for a set of instances randomly chosen according to distribution G. The teacher labels each instance x_i in the series [x_1,..,x_m] with True if T entails x_i and False otherwise. An hypothesis H is said to explain a set of examples E whenever it both entails and is consistent with E. On the basis of the series of labelled instances [e_1,e_2,...,e_m], a Turing machine learner L produces a sequence of hypotheses [H_1,H_2,...,H_m] such that H_i explains {e_1,...,e_i}. H_i must be suggested by L in expected time bounded by a fixed polynomial function of i. The teacher stops a session once the learner suggests a hypothesis H_m with expected error less than e for the label of any x_m+1 chosen randomly from G. [F,G] is said to be U-learnable if and only if there exists a Turing machine learner L such that, for any choice of d and e (0 < d, e <= 1), with probability at least (1-d), in any of the sessions m is less than a fixed polynomial function of 1/d and 1/e.
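In symbols (a compact restatement of the condition just stated, using the page's d and e): [F, G] is U-learnable iff there exist a learner L and a fixed polynomial p such that, for all 0 < d, e <= 1, in each session Pr[ m <= p(1/d, 1/e) ] >= 1 - d.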
U-learnability may be interpreted from a Bayesian perspective. The figure referred to on the original page showed the effect a series of examples E = {e1,..,em} has on the probabilities associated with the possible hypotheses. The learner's hypothesis language is laid out along the X-axis, with the prior probability p(H) = F(H) of each hypothesis H measured along the Y-axis; the sum of the probabilities of all hypotheses is 1. The descending dotted line in the figure represents a bound on the prior probabilities of hypotheses before consideration of the examples E. The hypotheses which entail and are consistent with the examples are marked as vertical bars. The prior probability of E, p(E), is simply the sum of the probabilities of the hypotheses that explain the examples. The conditional probability p(E|H) is 1 in the case that H explains E and 0 otherwise. The posterior probability of H is now given by Bayes theorem as p(H|E) = p(E|H) p(H) / p(E). For an hypothesis H which explains all the data, p(H|E) will increase monotonically with increasing E. U-learnability has demonstrated positive results for the popular decision tree learning program, CART.
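Because p(E|H) is either 0 or 1, the posterior simply renormalises the prior over the hypotheses that explain the examples. A small sketch (all numbers and names are made up for illustration):

    # Posterior update when p(E|H) is 1 for hypotheses that explain E and 0 otherwise.
    priors = {"H1": 0.4, "H2": 0.3, "H3": 0.2, "H4": 0.1}            # p(H) = F(H)
    explains = {"H1": False, "H2": True, "H3": True, "H4": False}    # does H explain E?

    p_E = sum(p for h, p in priors.items() if explains[h])           # p(E) = sum of consistent priors
    posteriors = {h: (p / p_E if explains[h] else 0.0) for h, p in priors.items()}
    print(posteriors)   # {'H1': 0.0, 'H2': 0.6, 'H3': 0.4, 'H4': 0.0}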
Current Research Issues

This section provides a brief outline of the research areas that will extend current ILP theory and systems.

Background Knowledge.
Relevance. When large numbers of background predicates are available, determining which predicates are relevant.
Revision. How clauses should be removed, or unfolded with respect to deep-structured theories.
Invention. Further research is required into predicate invention.

Complex Theories.
Multi-clause. Most present ILP systems are concerned with generating a single clause. Research is required into improved performance of multiple clause generation.
Deep Theories. Current ILP systems perform poorly in the presence of relevant long chains of literals connected by shared variables.
Recursion. Recursive hypotheses are poorly handled by most ILP systems. This is particularly important in the natural language domain.
Structure. Structural concepts result in complex clauses when encoded as literals. These present problems to the search strategies used in current ILP systems.

Built-in semantics.
Numbers. ILP systems have severe restrictions on the form of numeric constraints that can be used.
Probabilities. ILP systems lack the ability to express probabilistic constraints. This affects the performance of ILP systems in the database discovery domain.
Constraints. ILP systems require the ability to learn and make use of general constraints, rather than requiring large numbers of ground negative examples.
Built-in predicates. Some predicates are best defined procedurally. ILP systems may experience improved efficiency of learning using built-in predicates.

Sampling Issues.
Large Data Sets. Incremental learning systems may be more effective than batch systems, where difficulties are experienced with learning from all examples at once.
Small Data Sets. Statistical tests for significance break down when learning from small data sets. ILP systems need to demonstrate high predictive accuracy with small training sets.
Reliability. Extending ILP systems to indicate reliability estimates when exact generalisations are not possible.

OXFORD UNIVERSITY
Computing Laboratory
Machine Learning Group
 
INDUCTIVE LOGIC PROGRAMMING: THEORY
www.comlab.ox.ac.uk/oucl/groups/machlearn/ilp_theory.html
 