bark-with-voice-clone/bark/__pycache__/model_fine.cpython-38.pyc


2023-04-09 13:21:02 -04:00
"""
Much of this code is adapted from Andrej Karpathy's NanoGPT
(https://github.com/karpathy/nanoGPT)
"""
from dataclasses import dataclass
import math

import torch
import torch.nn as nn
from torch.nn import functional as F

from .model import GPT, GPTConfig, MLP


class NonCausalSelfAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        assert config.n_embd % config.n_head == 0
        # key, query, value projections for all heads, computed in one batched Linear
        self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias)
        # output projection
        self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias)
        # regularization
        self.attn_dropout = nn.Dropout(config.dropout)
        self.resid_dropout = nn.Dropout(config.dropout)
        self.n_head = config.n_head
        self.n_embd = config.n_embd
        self.dropout = config.dropout
        # flash attention is only available in PyTorch >= 2.0
        self.flash = (
            hasattr(torch.nn.functional, "scaled_dot_product_attention") and self.dropout == 0.0
        )

    def forward(self, x):
        B, T, C = x.size()  # batch size, sequence length, embedding dim (n_embd)
        # compute query, key, value for all heads, then move head dim forward
        q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
        k = k.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)  # (B, nh, T, hs)
        q = q.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)  # (B, nh, T, hs)
        v = v.view(B, T, self.n_head, C // self.n_head).transpose(1, 2)  # (B, nh, T, hs)
        if self.flash:
            # note: is_causal=False, unlike the causal attention in model.py
            y = torch.nn.functional.scaled_dot_product_attention(
                q, k, v, attn_mask=None, dropout_p=self.dropout, is_causal=False
            )
        else:
            # manual attention, again with no causal mask
            att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
            att = F.softmax(att, dim=-1)
            att = self.attn_dropout(att)
            y = att @ v  # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
        y = y.transpose(1, 2).contiguous().view(B, T, C)  # re-assemble head outputs
        # output projection
        y = self.resid_dropout(self.c_proj(y))
        return y


class FineBlock(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.ln_1 = nn.LayerNorm(config.n_embd)
        self.attn = NonCausalSelfAttention(config)
        self.ln_2 = nn.LayerNorm(config.n_embd)
        self.mlp = MLP(config)

    def forward(self, x):
        x = x + self.attn(self.ln_1(x))
        x = x + self.mlp(self.ln_2(x))
        return x


class FineGPT(GPT):
    def __init__(self, config):
        super().__init__(config)
        del self.lm_head
        self.config = config
        self.n_codes_total = config.n_codes_total
        self.transformer = nn.ModuleDict(
            dict(
                wtes=nn.ModuleList(
                    [
                        nn.Embedding(config.input_vocab_size, config.n_embd)
                        for _ in range(config.n_codes_total)
                    ]
                ),
                wpe=nn.Embedding(config.block_size, config.n_embd),
                drop=nn.Dropout(config.dropout),
                h=nn.ModuleList([FineBlock(config) for _ in range(config.n_layer)]),
                ln_f=nn.LayerNorm(config.n_embd),
            )
        )
        self.lm_heads = nn.ModuleList(
            [
                nn.Linear(config.n_embd, config.output_vocab_size, bias=False)
                for _ in range(config.n_codes_given, self.n_codes_total)
            ]
        )
        # weight tying between the per-codebook embeddings and the output heads
        for i in range(self.n_codes_total - config.n_codes_given):
            self.transformer.wtes[i + 1].weight = self.lm_heads[i].weight

    def forward(self, pred_idx, idx):
        device = idx.device
        b, t, codes = idx.size()
        assert (
            t <= self.config.block_size
        ), f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}"
        assert pred_idx > 0, "cannot predict 0th codebook"
        assert codes == self.n_codes_total, (b, t, codes)
        pos = torch.arange(0, t, dtype=torch.long, device=device).unsqueeze(0)  # shape (1, t)

        # forward the GPT model itself
        tok_embs = [
            wte(idx[:, :, i]).unsqueeze(-1) for i, wte in enumerate(self.transformer.wtes)
        ]  # token embeddings of shape (b, t, n_embd)
        tok_emb = torch.cat(tok_embs, dim=-1)
        pos_emb = self.transformer.wpe(pos)  # position embeddings of shape (1, t, n_embd)
        x = tok_emb[:, :, :, : pred_idx + 1].sum(dim=-1)
        x = self.transformer.drop(x + pos_emb)
        for block in self.transformer.h:
            x = block(x)
        x = self.transformer.ln_f(x)
        logits = self.lm_heads[pred_idx - self.config.n_codes_given](x)
        return logits

    def get_num_params(self, non_embedding=True):
        """
        Return the number of parameters in the model.
        For non-embedding count (default), the position embeddings get subtracted.
        The token embeddings would too, except due to the parameter sharing these
        params are actually used as weights in the final layer, so we include them.
        """
        n_params = sum(p.numel() for p in self.parameters())
        if non_embedding:
            for wte in self.transformer.wtes:
                n_params -= wte.weight.numel()
            n_params -= self.transformer.wpe.weight.numel()
        return n_params


@dataclass
class FineGPTConfig(GPTConfig):
    n_codes_total: int = 8
    n_codes_given: int = 1
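The key idea in `FineGPT.forward` is that each EnCodec codebook gets its own embedding table, and the input to the transformer is the element-wise sum of the embeddings of codebooks 0 through `pred_idx`. A minimal pure-Python sketch of that summation (toy hand-made embedding tables, no torch; all names here are hypothetical, not part of Bark's API):

```python
# Toy illustration of FineGPT's codebook-embedding sum (no torch).
# Each codebook has its own embedding table; for a given pred_idx, the
# model sums the embeddings of codebooks 0..pred_idx at every position.

def sum_codebook_embeddings(tables, idx, pred_idx):
    """tables: list of {token: vector} dicts, one per codebook.
    idx: per-position token tuples, idx[t][c] = token of codebook c at position t.
    Returns one summed vector per position, using codebooks 0..pred_idx only."""
    dim = len(next(iter(tables[0].values())))
    out = []
    for tokens in idx:
        vec = [0.0] * dim
        for c in range(pred_idx + 1):
            vec = [a + b for a, b in zip(vec, tables[c][tokens[c]])]
        out.append(vec)
    return out

tables = [
    {0: [1.0, 0.0], 1: [0.0, 1.0]},  # codebook 0
    {0: [2.0, 0.0], 1: [0.0, 2.0]},  # codebook 1
    {0: [4.0, 0.0], 1: [0.0, 4.0]},  # codebook 2 (ignored when pred_idx=1)
]
idx = [(0, 1, 0), (1, 0, 1)]         # two positions, three codebooks each

print(sum_codebook_embeddings(tables, idx, pred_idx=1))
# [[1.0, 2.0], [2.0, 1.0]]
```

In the real model this sum is the slice-and-reduce `tok_emb[:, :, :, : pred_idx + 1].sum(dim=-1)`: codebooks beyond the one being predicted are excluded so the model only conditions on codebooks it would already have at inference time.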